A Short Primer On "extras_require" in setup.py

To include optional installation capabilities in your Python module’s setup.py file, you can use the extras_require parameter. The extras_require parameter allows you to define groups of optional dependencies that users can install by specifying an extra name when running pip install.

Here’s an example setup.py file that includes an optional dependency group for running tests:

from setuptools import setup, find_packages

setup(
    name='mymodule',
    version='0.1.0',
    description='My awesome module',
    packages=find_packages(),
    install_requires=[
        # Required dependencies go here
        'numpy',
        'pandas',
    ],
    extras_require={
        'test': [
            # Optional dependencies for testing go here
            'pytest',
            'coverage',
        ]
    },
)

In this example, the install_requires parameter lists the required dependencies for your module; these are installed regardless of which optional dependency groups are selected.

The extras_require parameter defines an optional dependency group called test, which includes the pytest and coverage packages. Users can install these packages by running pip install mymodule[test].

One can define multiple optional dependency groups by adding additional keys to the extras_require dictionary.
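
For example, here is a sketch with separate test, docs, and viz groups (the docs and viz names and their package lists are just illustrative):

from setuptools import setup, find_packages

setup(
    name='mymodule',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['numpy', 'pandas'],
    extras_require={
        'test': ['pytest', 'coverage'],
        'docs': ['sphinx'],        # hypothetical documentation group
        'viz': ['matplotlib'],     # hypothetical plotting group
    },
)

Users can also combine groups in one install, for example pip install "mymodule[test,viz]".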

Using optional dependencies with the extras_require parameter in your Python module’s setup.py file has several advantages:

  • It allows users to install only the dependencies they need: By defining optional dependency groups, users can choose which additional dependencies to install based on their needs. This can help to reduce the amount of disk space used and minimize potential conflicts between packages.
  • It makes your module more flexible: By offering optional dependency groups, your module becomes more flexible and can be used in a wider range of contexts. Users can customize their installation to fit their needs, which can improve the overall user experience.
  • It simplifies dependency management: By clearly defining which dependencies are required and which are optional, you can simplify dependency management for your module. This can make it easier for users to understand what they need to install and help to prevent dependency-related issues.
  • It keeps the default installation lean: By moving packages that are only needed in certain scenarios (for example, visualization or extra data processing) into optional groups, users who don’t need those features never install them, which keeps the base installation smaller and quicker to set up.


Monitor Your Raspberry Pi with Flask: Free Disk Space and Latest File

Are you tired of manually checking your Raspberry Pi’s disk space and latest files? With a few lines of Python code and the Flask web framework, you can create a simple application that monitors your Raspberry Pi for you.

In this post, we will walk through the code that monitors the free disk space on your Raspberry Pi and returns the latest modified file in a specified folder.

To get started, we need to install the Flask and psutil libraries (for example with pip install flask psutil).

Once you have the dependencies installed, create a new Python file and copy the following code:

from flask import Flask
import os
import psutil

app = Flask(__name__)

@app.route("/disk-space")
def disk_space():
    disk = psutil.disk_usage("/")
    free = disk.free // (1024 * 1024)
    return str(free) + " MB"

@app.route("/file")
def get_recent_file():
    folder = "/home/pi/Documents"
    files = os.listdir(folder)
    files.sort(key=lambda x: os.path.getmtime(os.path.join(folder, x)))
    recent_file = files[-1]
    return recent_file

if __name__ == '__main__':
    app.run(port=8989)

Let’s break down this code.

First, we import the Flask, os, and psutil libraries. Flask is the web framework that we will use to create the application. The os library provides a way to interact with the Raspberry Pi’s file system. Psutil is a cross-platform library for retrieving system information.

Next, we create a new Flask application instance and define two routes: /disk-space and /file.

The /disk-space route uses the psutil library to obtain the amount of free disk space on the Raspberry Pi’s root file system (“/”). The value is converted to megabytes and returned as a string.

The /file route lists all files in the specified folder (in this case, the “Documents” folder in the Raspberry Pi user’s home directory) and returns the name of the most recently modified file. The files are sorted based on their modification time using os.path.getmtime.
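
If the Documents folder can contain subdirectories or be empty, a slightly more defensive version of this route (a sketch, not part of the original code, assuming the same app and imports as above) might look like this:

@app.route("/file")
def get_recent_file():
    folder = "/home/pi/Documents"
    # Keep only regular files, ignoring any subdirectories
    files = [f for f in os.listdir(folder)
             if os.path.isfile(os.path.join(folder, f))]
    if not files:
        return "No files found"
    files.sort(key=lambda x: os.path.getmtime(os.path.join(folder, x)))
    return files[-1]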

Finally, we start the Flask application on port 8989.

To run this application on your Raspberry Pi, save the code to a file (e.g. app.py) and run the following command:

python app.py

This will start the Flask application, and you can access the routes by visiting http://<your-raspberry-pi-address>:8989/disk-space and http://<your-raspberry-pi-address>:8989/file in your web browser (to reach the app from another machine, pass host="0.0.0.0" to app.run).

That’s it! With just a few lines of code, you can now monitor your Raspberry Pi’s free disk space and latest files.

You can easily modify this code to add more routes and functionality to suit your needs. Happy coding!


Travel the World from Your Desktop: How to Use Python to Switch Up Your Wallpaper

Are you tired of staring at the same old desktop background on your Windows laptop every day? Do you have a collection of beautiful travel pictures that you’d love to see on your screen instead? If you answered yes to both of these questions, then you’re in luck! In this post, I’ll show you how to create a Python script that changes your desktop background every few minutes using your favorite travel photos. I have done the same for my office laptop.

First, create a new folder on your laptop called “pics” and add your favorite travel pictures to it. You can use images from your own travels or download high-quality images from a website of your choice.

Next, let’s create the Python script that will change your desktop background. Open up your favorite text editor and create a new file called “change_background.py”. Then, copy and paste the following code:

import ctypes
import os
from random import choice
import sched
import time

event_schedule = sched.scheduler(time.time, time.sleep)

SPI_SETDESKWALLPAPER = 20

FOLDER = r"C:\Users\sukhbinder.singh\pics"
FILES = [os.path.join(FOLDER, f) for f in os.listdir(FOLDER)]

def change_wallpaper():
    # Set a random image from FILES as the desktop wallpaper
    ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, choice(FILES), 0)
    # Schedule the next change after a random number of minutes
    event_schedule.enter(choice([13, 23, 7, 11, 5]) * 60, 1, change_wallpaper, ())

if __name__ == "__main__":
    event_schedule.enter(10, 1, change_wallpaper, ())
    event_schedule.run()

Make sure to replace the FOLDER value with the actual path to your images.

This code uses the ctypes module to call the SystemParametersInfoW function from the Windows API, which changes the desktop background to a random image from the “pics” folder. The script then waits a random number of minutes (picked from the list in the code, between 5 and 23) before changing the background again.

Save the file and make sure it is in a directory that you can easily access. Now, let’s schedule the script to run automatically every time you start your laptop.

Open up the Windows Task Scheduler by searching for it in the Start menu. Click on “Create Basic Task” and follow the prompts to set up a new task.

When prompted to choose a program/script, browse to the location of your “change_background.py” file and select it. Set the trigger to “At Startup” and click “Finish” to complete the setup.

Now, every time you start your laptop, your Python script will automatically run in the background and change your desktop background every few minutes.

In conclusion, with just a few lines of Python code and the Windows Task Scheduler, you can turn your boring desktop background into a slideshow of your favorite travel photos. Give it a try and let me know how it goes!


Getting Battery Percentage in Windows with Python

Battery percentage is an important aspect of mobile devices, laptops, and other battery-powered electronic devices. It tells us how much energy the battery has left, which is crucial in determining how long the device will last before needing to be recharged.

In this blog post, we will see how to get the battery percentage in Windows using Python.

Using the psutil Library

The psutil library is a comprehensive library for retrieving information about the system and processes running on it. It provides a simple and straightforward way to access the battery percent information in Windows.

Here is an example code that demonstrates how to use psutil to get the battery percent information in Windows:

import psutil

# sensors_battery() returns the current battery status
battery = psutil.sensors_battery()
print("Battery Percentage:", battery.percent)

In the code above, we first import the psutil library. Then, we use the sensors_battery() function to get the battery status.

This function returns a named tuple with fields that provide information about the battery, such as percent, power_plugged, and secsleft. In the example above, we print the battery’s charge percentage.
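
As a small extension (a sketch, not from the original post), the same call also tells you whether the machine is plugged in, and it returns None on desktops without a battery:

import psutil

battery = psutil.sensors_battery()
if battery is None:
    print("No battery detected")
else:
    status = "plugged in" if battery.power_plugged else "on battery"
    print("Battery at {}% ({})".format(battery.percent, status))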


How to Enable CORS in Django

My Django learning app, deployed on a Raspberry Pi for the kids, was functioning smoothly when accessed from the home network. However, when we had to travel to Kolkata, the app was promoted to a PythonAnywhere server. This move brought about a new challenge, as the app started to face issues with Cross-Origin Resource Sharing (CORS).

I soon realized that this was a common issue and could be easily resolved by enabling CORS in the Django app.

I followed the simple process below and soon had the app up and running smoothly again, with CORS enabled.

Here’s how to enable CORS in Django:

  • Install the django-cors-headers package by running python -m pip install django-cors-headers
  • Add it to the INSTALLED_APPS list in your Django settings:
INSTALLED_APPS = [
    ...
    'corsheaders',
    ...
]
  • Add the CorsMiddleware class to the MIDDLEWARE list:
MIDDLEWARE = [
    ...,
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    ...,
]
  • Configure the CORS headers by setting the following variables in your Django settings:
CORS_ALLOW_ALL_ORIGINS = True
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOWED_ORIGINS = [
    'http://localhost:1234',
]
CORS_ALLOWED_ORIGIN_REGEXES = [
    'http://localhost:1234',
]

Note: CORS_ALLOW_ALL_ORIGINS set to True allows requests from any origin, while CORS_ALLOWED_ORIGINS only allows the specific origins listed. In practice you would use one or the other, not both, since allowing all origins makes the allowlist redundant.
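
If you need to allow a whole family of origins, say localhost on any port, CORS_ALLOWED_ORIGIN_REGEXES accepts regular expressions rather than literal origins. A small sketch (the pattern here is just an example):

CORS_ALLOWED_ORIGIN_REGEXES = [
    r'^http://localhost:\d+$',
]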

And so, the journey with the Django app continued without any further hiccups, and the kids are still using it for their spaced revision and review.

Handling Multiple Inputs with argparse in Python Scripts

argparse demo for multiple inputs

The problem.

ffmpeg allows multiple inputs to be specified using the same keyword, like this:

ffmpeg -i input1.mp4 -i input2.webm -i input3.mp4

Let’s say you are trying to write a script in Python that accepts multiple input sources and does something with each one, as follows:

python_script -i input1.mp4 -i input2.webm -i input3.mp4

How do we do this in argparse?

Using argparse, you face an issue: each option flag can only be used once. You know how to associate multiple arguments with a single option (using nargs='*' or nargs='+'), but that still won’t allow you to use the -i flag multiple times.

How can this be accomplished?

Here’s sample code to accomplish what you need using the argparse library:

import argparse

parser = argparse.ArgumentParser()
# action='append' collects a value each time -i/--input is given
parser.add_argument('-i', '--input', action='append', type=str, help='input file name')

args = parser.parse_args()
inputs = args.input or []  # args.input is None when -i is never passed

# Process each input
for input in inputs:
    # Do something with the input
    print(f'Processing input: {input}')

With this code, the input can be passed as:

python_script.py -i input1.mp4 -i input2.webm -i input3.mp4

The key to the whole program is the value "append" passed to the action keyword.
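
For comparison, here is a minimal sketch of the nargs='+' approach mentioned above: a single -i flag can take several values in one go (python_script.py -i input1.mp4 input2.webm input3.mp4), but repeating -i then only keeps the last group, which is why action='append' is the better fit here.

import argparse

parser = argparse.ArgumentParser()
# One -i flag followed by one or more values
parser.add_argument('-i', '--input', nargs='+', help='input file names')

args = parser.parse_args()
print(args.input)  # a list of the values given after -i, or None if -i was not used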

Hope this helps.


How to Suppress Terminal Window For Python Scripts

In Windows, Python scripts are executed by python.exe by default. This executable opens a terminal window, which stays open even if the program uses a GUI.

What to do if you do not want this to happen?

Well, use the extension .pyw. This will cause the script to be executed by pythonw.exe by default, and pythonw.exe suppresses the terminal window on startup.

or

you can run your script using the pythonw.exe command like this:

C:\>pythonw.exe c:\scripts\predict_now.py

Hope this helps. Most of my automation and daily backup scripts on my office computer run this way and do not leave a visible footprint on the taskbar.


Export PowerPoint Slides with Python

A couple of years ago, I had an issue where I needed to export PowerPoint slides as PNG images. There were a lot of them, so doing it manually was out of the question. Here’s a quick Python script to export PowerPoint slides to PNG.

import win32com.client


class ApplicationEvents(object):
    def OnQuit(self):
        print("quitting")


spath = r"C:\Users\sukhbinder\Desktop\cool_presentation.pptx"

# Open the presentation, export every slide as a PNG into the target folder, then close it
app = win32com.client.DispatchWithEvents("Powerpoint.Application", ApplicationEvents)
doc = app.Presentations.Open(spath, False)
doc.Export(r"C:\Users\sukhbinder\Downloads", "PNG")
doc.Close()

Hope this helps someone.


Example of Subparser/Sub-Commands with Argparse

I like argparse. Yes, there are many other utilities that make life easy, but I am still a fan of argparse, mostly because it’s part of the standard Python installation. No other installs needed.

Argparse is powerful too. If you have used git, you will have experienced subcommands. Here’s how one can implement the same with argparse.

import argparse

# log_com, show_com, search_com and init() are helpers defined elsewhere in the project

def main():
    parser = argparse.ArgumentParser(description="Jotter")
    subparser = parser.add_subparsers()

    log_p = subparser.add_parser("log")
    log_p.add_argument("text", type=str, nargs="*", default=None)
    log_p.set_defaults(func=log_com)

    show_p = subparser.add_parser("show")
    show_p.add_argument("--all", action="store_true")
    show_p.add_argument("--id", type=int, default=0)
    show_p.add_argument("-s", "--skip", type=int, default=0)
    show_p.add_argument("-l", "--limit", type=int, default=100)
    show_p.set_defaults(func=show_com)

    search_p = subparser.add_parser("search")
    search_p.add_argument("search", type=str, default=None)
    search_p.add_argument("-limit", type=int, default=100)
    search_p.set_defaults(func=search_com)

    init()
    args = parser.parse_args()
    args.func(args)

In the above code, jotter is our main command; it has subcommands like jotter log, jotter show, and jotter search, and each subcommand dispatches to its handler through args.func(args).
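
For a self-contained illustration that you can run directly, here is a minimal sketch with two made-up subcommands, greet and add (not part of the jotter code above):

import argparse

def greet(args):
    print("Hello, {}!".format(args.name))

def add(args):
    print(args.a + args.b)

parser = argparse.ArgumentParser(prog="demo")
subparsers = parser.add_subparsers(dest="command", required=True)

greet_p = subparsers.add_parser("greet")
greet_p.add_argument("name")
greet_p.set_defaults(func=greet)

add_p = subparsers.add_parser("add")
add_p.add_argument("a", type=int)
add_p.add_argument("b", type=int)
add_p.set_defaults(func=add)

args = parser.parse_args()
args.func(args)

Running python demo.py greet World prints Hello, World!, and python demo.py add 2 3 prints 5.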

Have you used this before?


Automating Copying of Files from Raspberry Pi using Python

My Raspberry Pi has just a 32 GB memory card, so another issue I face with my timelapse automation is regularly copying the files from the Raspberry Pi to my laptop.

I have tried various options like git, secure copy (SCP), FTP, SSH, etc. All of them work but have their limitations.

But there is one setup that I have finally stuck with, and it works seamlessly. Once again it’s implemented with Python and uses the wget command-line tool.

Here’s the code that lets me transfer the files from the Raspberry Pi to my laptop. I just run it on a schedule on my Mac every week.



from datetime import datetime, timedelta
import os
import subprocess
import argparse

# Assumes the Pi serves its files over HTTP, e.g. with "python3 -m http.server 8000"
BASE_URL = r"http://192.168.0.112:8000/Desktop/images/{}"


def get_dir(day=1, outfolder=r"/Users/sukhbindersingh/pyimages"):
    # Build yesterday's (or an earlier day's) file name and download it with wget
    if day > 0:
        day = day * -1
    now = datetime.now()
    yesterday = now + timedelta(days=day)
    datestr = yesterday.strftime("%m_%d_%Y_")
    fname = "v_{}_overval.mp4".format(datestr)
    fname_src = BASE_URL.format(fname)
    cmdline = "wget {}".format(fname_src)
    print("downloading {}".format(fname_src))
    os.chdir(outfolder)
    iret = subprocess.call(cmdline.split())
    return iret


parser = argparse.ArgumentParser("download_video", description="Download raspberry pi videos")
parser.add_argument("-d", "--days", type=int, help="No of backdays to download", default=1)
parser.add_argument("-o", "--outdir", type=str, help="Output dir where downloaded file will be kept", default=None)

args = parser.parse_args()

outfolder = args.outdir
if outfolder is None:
    outfolder = r"/Users/sukhbindersingh/pyimages"

for day in range(args.days):
    iret = get_dir(day + 1, outfolder)


How would you solve this? Do you have another way it can be done? Do let me know in the comments.


Principal Component Analysis in pure Numpy

In 2009 I was working with principal component analysis (PCA) in my job. It was my first introduction to the topic, so I played with it in the office and at home in my spare time.

Python was my favourite play tool at that time. I stumbled upon this code, which I wrote in 2013 as part of a personal project.

In case you are wondering, what is PCA?

Principal component analysis (PCA) is a standard tool in modern data analysis and is used in many diverse fields, from computer graphics and machine learning to neuroscience, because it is a simple, non-parametric method for extracting relevant information from enormous and confusing data sets.

With minimal effort PCA provides a map for how to reduce a complex data set to a lower dimension to reveal the sometimes hidden, simplified structures that often underlie it.

Shame I did not have GitHub then, or it would have been posted there, so here it goes.

# -*- coding: utf-8 -*-
"""
Created on Sun Jan 31 11:03:57 2013

@author: Sukhbinder
"""

import numpy as np


def pca1(x):
    """Determine the principal components of a vector of measurements.

    x should be an M x N numpy array composed of M observations of N variables.

    PCA using the covariance matrix.

    The output is:
    coeff   - the N x N matrix of principal component coefficients (eigenvectors),
              one component per column, that can be used to transform x into its components
    signals - the N x M array of projected data, one row per component
    V       - the variances (eigenvalues), largest first

    The code for this function is based on "A Tutorial on Principal Component
    Analysis", Shlens, 2005 http://www.snl.salk.edu/~shlens/pub/notes/pca.pdf
    (unpublished)
    """
    (M, N) = x.shape
    Mean = x.mean(0)
    y = x - Mean
    cov = np.dot(y.T, y) / (M - 1)
    (V, PC) = np.linalg.eig(cov)

    # Sort the components by decreasing variance
    order = (-V).argsort()
    V = V[order]
    coeff = PC[:, order]
    signals = np.dot(coeff.T, y.T)
    return coeff, signals, V


def pca2(x):
    """Determine the principal components of a vector of measurements.

    x should be an M x N numpy array composed of M observations of N variables.

    PCA using the singular value decomposition.

    The output is:
    pc      - the principal component coefficients that can be used to
              transform x into its components
    signals - the projected data
    v       - the variances (squared singular values)

    The code for this function is based on "A Tutorial on Principal Component
    Analysis", Shlens, 2005 http://www.snl.salk.edu/~shlens/pub/notes/pca.pdf
    (unpublished)
    """
    (M, N) = x.shape
    Mean = x.mean(0)
    y = x - Mean
    yy = y.T / np.sqrt(M - 1)
    u, s, pc = np.linalg.svd(yy)
    v = s ** 2  # variances are the squared singular values
    signals = np.dot(pc.T, y)
    return pc, signals, v

scikit-learn and other libraries do have PCA, so what was the need to write my own PCA code?

Well, I was trying to understand PCA deeply and I couldn’t use sklearn, so this piece of code was written completely in NumPy. It helped me reduce the resolution of my family pictures back in 2013, before Google Photos made this redundant. 🙂
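
As a quick way to try pca1 on synthetic data (a sketch, assuming the function above is pasted into the same session): two of the three variables below are nearly identical, so one component should capture almost no variance.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))                      # 200 observations of 3 variables
x[:, 2] = x[:, 0] + 0.05 * rng.normal(size=200)    # third variable is nearly a copy of the first

coeff, signals, variances = pca1(x)
print(variances)        # eigenvalues, largest first; the last one should be tiny
print(signals.shape)    # projected data, one row per component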


Happy Independence Day

The Internet is amazing. I wrote a Python program that produces the above interactive animation about 9 years ago to celebrate Independence Day, and it is still available and working. Amazing. I am truly surprised.

To give it a try follow this link. Click run and then drag your mouse within the black screen.

Do give it a try, the gif is a poor rendition of what the program actually produces.

As always the code is available here too on this blog.

How to resolve this pandas ValueError: arrays must all be same length

Consider the following code.

import numpy as np
import pandas as pd

in_dict = dict(a=np.random.rand(3), b=np.random.rand(6), c=np.random.rand(2))

df = pd.DataFrame.from_dict(in_dict)

This fails with the following error:

df = pd.DataFrame.from_dict(in_dict)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-2c9e8bf1abe9> in <module>
----> 1 df = pd.DataFrame.from_dict(in_dict)

~\Anaconda3\lib\site-packages\pandas\core\frame.py in from_dict(cls, data, orient, dtype, columns)
   1371             raise ValueError("only recognize index or columns for orient")
   1372
-> 1373         return cls(data, index=index, columns=columns, dtype=dtype)
   1374
   1375     def to_numpy(

~\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy)
    527
    528         elif isinstance(data, dict):
--> 529             mgr = init_dict(data, index, columns, dtype=dtype)
    530         elif isinstance(data, ma.MaskedArray):
    531             import numpy.ma.mrecords as mrecords

~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in init_dict(data, index, columns, dtype)
    285             arr if not is_datetime64tz_dtype(arr) else arr.copy() for arr in arrays
    286         ]
--> 287     return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
    288
    289

~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity)
     78         # figure out the index, if necessary
     79         if index is None:
---> 80             index = extract_index(arrays)
     81         else:
     82             index = ensure_index(index)

~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in extract_index(data)
    399             lengths = list(set(raw_lengths))
    400             if len(lengths) > 1:
--> 401                 raise ValueError("arrays must all be same length")
    402
    403             if have_dicts:

ValueError: arrays must all be same length

The solution is simple. I have faced this situation a lot, so I’m posting it here on the blog for easy reference.

use orient='index'

df = pd.DataFrame.from_dict(in_dict, orient='index')

df.head()

          0         1         2         3         4         5
a  0.409699  0.098402  0.399315       NaN       NaN       NaN
b  0.879116  0.460574  0.971645  0.147398  0.939485  0.222164
c  0.747605  0.123114       NaN       NaN       NaN       NaN

df.T

          a         b         c
0  0.409699  0.879116  0.747605
1  0.098402  0.460574  0.123114
2  0.399315  0.971645       NaN
3       NaN  0.147398       NaN
4       NaN  0.939485       NaN
5       NaN  0.222164       NaN
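
Another option I sometimes reach for (a small sketch, not from the original post) is wrapping each array in a Series; pandas then aligns them on the index and pads the shorter columns with NaN, without needing the transpose:

import numpy as np
import pandas as pd

in_dict = dict(a=np.random.rand(3), b=np.random.rand(6), c=np.random.rand(2))

# Each Series keeps its own index, so shorter columns are padded with NaN
df = pd.DataFrame({key: pd.Series(value) for key, value in in_dict.items()})
print(df)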


Put an Image Behind your matplotlib plots

Here’s a quick one.

Problem.

You want to add pretty graphics behind your data. How do you do this with matplotlib?

Solution

import numpy as np
import matplotlib.pyplot as plt

# Path to the image
fpath = r"C:\Users\sukhbinder.singh\Desktop\day.jpg"

# Read the image
img = plt.imread(fpath)

# Plot the image
fig, ax = plt.subplots()
ax.imshow(img)
a, b, c, d = plt.axis('off')  # axis('off') also returns the current (xmin, xmax, ymin, ymax)

# Now plot your appropriately scaled data. We will plot some
# random numbers
xx = np.random.randint(a, b, size=100)
yy = np.random.randint(d, c, size=100)
plt.plot(xx, yy, "r.")
plt.savefig("wall.png")
plt.show()

Simple. Here’s the result

Image as Background in a Matplotlib Plot

Some Useful pytest Command-line Options

I love pytest.

Pytest is a testing framework that allows us to write test code as plain Python functions, and that is awesome.

Why use PyTest?

There are many reasons to use pytest; here are some that I feel are important.

  • Very easy to start with because of its simple and easy syntax.
  • Less Boilerplate
  • Can run a specific test or a subset of tests
  • and many more useful features

Here’s a list of command-line options that can be used while using pytest.

Simple use:
pytest

Too unorganised; let’s fix this:

pytest -v

Much better.

Oh, there’s a failure, but there is too much information about it. Let’s fix that with

pytest -v --tb=line

This is good, but just a line of info is too little. OK, let’s try this:

pytest -v --tb=short

That’s good.

What if I want to run a specific test? No problem, just use the "-k" option:

pytest -v -k "SOMENAME"

That’s cool. What if I want to run just the last failed test or tests? Simple, use "--lf":

pytest -v --lf

And if you want to debug the failed tests, use "--pdb":

pytest -v --pdb

On failure, it will drop you into the debugger.
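
If you want something to try these options on, here is a tiny, deliberately failing test file (hypothetical, named test_sample.py here) that works well for experimenting with --tb, --lf and --pdb:

# test_sample.py
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

def test_add_fails():
    # Deliberately wrong, so that --tb, --lf and --pdb have something to show
    assert add(2, 2) == 5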

Well, that’s it for this post. Hope this helps.


Python Logger Printing Multiple Times

This is my standard boilerplate code for adding logging functionality to an app. It makes starting and working on projects super easy. But sometimes, if the project involves multiple modules, there is an annoying little problem: the log messages are printed multiple times on the console.

import logging
import os


def create_logger(name="imap", path=os.getcwd()):
    fname = os.path.join(path, "{}.log".format(name))
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    # Create handlers
    c_handler = logging.StreamHandler()
    f_handler = logging.FileHandler(fname, mode="w")
    c_handler.setLevel(logging.INFO)
    f_handler.setLevel(logging.DEBUG)
    # Create formatters and add them to the handlers
    c_format = logging.Formatter("%(levelname)s %(message)s")
    f_format = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)
    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)

    return logger


def get_logger(name: str):
    logger = logging.getLogger(name)
    return logger

In one particular project, which was using multiple modules, this setup was causing the logging messages to print multiple times. This duplicate output in a simple python logging configuration was not harmful but was annoying.

After a few Google searches, false restarts, and reading multiple debates on Stack Overflow, I found a solution that is as simple as this.

Solution

logger.propagate = False

Setting propagate to False stops log records from being passed up to the ancestor (root) logger’s handlers, so each message is emitted only once, by the handlers attached here.

The full code that works without the flaw is shown below:

import logging
import os


def create_logger(name="imap", path=os.getcwd()):
    fname = os.path.join(path, "{}.log".format(name))
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    # Create handlers
    c_handler = logging.StreamHandler()
    f_handler = logging.FileHandler(fname, mode="w")
    c_handler.setLevel(logging.INFO)
    f_handler.setLevel(logging.DEBUG)
    # Create formatters and add them to the handlers
    c_format = logging.Formatter("%(levelname)s %(message)s")
    f_format = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)
    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)
    logger.propagate = False

    return logger


def get_logger(name: str):
    logger = logging.getLogger(name)
    return logger
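
A quick usage sketch (assuming the two functions above are importable): call create_logger once at startup, then get_logger from any other module.

logger = create_logger("imap")
logger.info("shown on the console and written to imap.log")
logger.debug("written to imap.log only")

other = get_logger("imap")   # e.g. from another module; same logger, same handlers
other.warning("still printed only once")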




Standard setup.py

Yes, yes, I know we should use Poetry and other packaging mechanisms, but for small personal projects or a simple application, setup.py is a good place to begin.

Here’s a sample setup.py that I have used in many of my personal projects:


import pathlib
from setuptools import find_packages, setup


# The directory containing this file
HERE = pathlib.Path(__file__).parent

# The text of the README file
README = (HERE / "README.md").read_text()

setup(
    name="winsay",
    version="1.1",
    packages=find_packages(),
    license="Private",
    description="say in windows",
    long_description=README,
    long_description_content_type="text/markdown",
    author="sukhbinder",
    author_email="sukh2010@yahoo.com",
    url = 'https://github.com/sukhbinder/winsay',
    keywords = ["say", "windows", "mac", "computer", "speak",],
    entry_points={
        'console_scripts': ['say = winsay.winsay:main', ],
    },
    install_requires=["pywin32"],
    classifiers=[
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.7",
    ],

)

This is useful and is always a good starting point for my project files.

JSON to Named Tuple

If you have a JSON file and you are tired of getting the JSON back as a plain vanilla dictionary, the following code, using the namedtuple available in the collections module of standard Python, can come to your rescue.

Here’s an example:

from collections import namedtuple
import json

fname = r"D:\pool\JobFolder\INLT2916\1\run_1\sample_1.json"
with open(fname, "r") as fin:
    data = json.load(fin)


def convert(dictionary):
    # Recursively turn nested dictionaries into named tuples
    for key, value in dictionary.items():
        if isinstance(value, dict):
            dictionary[key] = convert(value)
    return namedtuple('GenericDict', dictionary.keys())(**dictionary)


objdata = convert(data)

Observe the convert definition. Now one can access the JSON’s elements like this:

objdata.tasks.NXUpdate.start_date
objdata.metadata.running_tasks
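
If you want to try this without a JSON file on disk, here is a self-contained sketch with a small inline JSON string (the keys are made up for illustration):

from collections import namedtuple
import json

def convert(dictionary):
    # Recursively turn nested dictionaries into named tuples
    for key, value in dictionary.items():
        if isinstance(value, dict):
            dictionary[key] = convert(value)
    return namedtuple('GenericDict', dictionary.keys())(**dictionary)

data = json.loads('{"metadata": {"running_tasks": 2}, "status": "ok"}')
obj = convert(data)
print(obj.metadata.running_tasks)   # 2
print(obj.status)                   # ok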

Hope this helps someone.
