Determining if the screen is locked using Python's standard library

I think that, apart from its ease of use, Python's batteries-included philosophy is one of the reasons it has become so popular.

Here's another cool piece of functionality we needed in one of our apps, which tried to maximise the usage of computing resources while the user had their computer locked.

The problem:

Find out whether the user has locked the screen.

import ctypes


def screen_locked():
    """
    Find out whether the user has locked their screen (Windows only).
    """
    user32 = ctypes.windll.User32
    OpenDesktop = user32.OpenDesktopA
    SwitchDesktop = user32.SwitchDesktop
    DESKTOP_SWITCHDESKTOP = 0x0100

    # The ANSI API expects a bytes string for the desktop name
    hDesktop = OpenDesktop(b"default", 0, False, DESKTOP_SWITCHDESKTOP)
    result = SwitchDesktop(hDesktop)
    user32.CloseDesktop(hDesktop)
    # SwitchDesktop fails while the workstation is locked
    return not result




File and Folder Comparison with Python

Python standard library modules are incredible. There's a small gem for comparing files and directories.

It's useful.

Say you have two ASCII files and you want to do a file comparison; don't worry, use Python.

import filecmp

# Check two files
assert filecmp.cmp(base_reduced_bdd, bdd_file, shallow=False) is True

To compare two directories, use

x = filecmp.dircmp(dir1, dir2)

# prints a report on the differences between dir1 and dir2
x.report() 

filecmp module has utilities for comparing files and directories.

It consists of the following.

Classes:
    dircmp

Functions:
    cmp(f1, f2, shallow=True) -> int
    cmpfiles(a, b, common) -> ([], [], [])
    clear_cache()



Signature: filecmp.cmp(f1, f2, shallow=True)
Docstring:
Compare two files.

Arguments:

f1 -- First file name

f2 -- Second file name

shallow -- Just check stat signature (do not read the files).
           defaults to True.

Return value:

True if the files are the same, False otherwise.

This function uses a cache for past comparisons and the results,
with cache entries invalidated if the files' stat information
changes. The cache may be cleared by calling clear_cache().

Refer to the docs for more usage.

Get All Info About a Python Environment

Conda makes creating environments easy, and if you are anything like me, over the course of time you end up having many environments and it becomes difficult to know what is what.

Basic hygiene is to make environment names unique and descriptive. But even then, knowing what an environment contains becomes difficult.

Here's a general script that I use to get all Python-related information inside an environment.

Get all Python info:

import sys
import os
import pkg_resources
from pprint import pprint


pprint({
    'sys.version_info': sys.version_info,
    'sys.prefix': sys.prefix,
    'sys.path': sys.path,
    'pkg_resources.working_set': list(pkg_resources.working_set),
    'PATH': os.environ['PATH'].split(os.pathsep),
})

Simple and it works.
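As an aside, on Python 3.8+ the standard library's importlib.metadata can list the installed distributions without pkg_resources (which setuptools has since deprecated). A minimal sketch; the package names you see will vary per environment:

```python
import sys
from importlib import metadata

# Name and version of every distribution visible to this interpreter
installed = {dist.metadata["Name"]: dist.version
             for dist in metadata.distributions()
             if dist.metadata["Name"]}

print(sys.prefix)
print(sorted(installed)[:5])  # a few of the installed package names
```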

Starter Argparse and Setup.py templates

I tend to develop my Python scripts as small apps; this way I can call and use them from anywhere on the command line without invoking the Python scripts directly.

Here's the setup.py and argparse template that I always start with.

The setup.py example makes the application an installable package.

install_requires is a list of packages required by the app.
entry_points is optional; if given, the app can be called using 'app_name' from the command line.

from setuptools import find_packages, setup

setup(
    name="name",
    version="3.0",
    packages=find_packages(),
    include_package_data=True,
    zip_safe=False,
    license="Private",
    description="This is the description",
    author="author_name",
    author_email="author_contact_email",
    install_requires=["psutil", "pywin32"],
    entry_points={
        'console_scripts': ['app_name = app:main']
    }
)

Argparse example template

Following is a starter template for using argparse.

import argparse
import os


def main():
    parser = argparse.ArgumentParser(
        description="Pro Thermals: Uses Iges from NX Update to perform thermal analysis on scenery Model")
    parser.add_argument("indir", type=str, help="Input dir")
    parser.add_argument("outdir", type=str, help="Output dir")
    parser.add_argument("-s", "--scpath", type=str, help="Simulation executable path",
                        default="W:\\simulation_app\\MSWindows\\bin\\x64")
    parser.add_argument("-o", "--omp-threads", type=int,
                        help="How many OMP_NUM_THREADS the app uses while running",
                        default=1)

    args = parser.parse_args()

    input_dir = args.indir or os.getcwd()
    output_dir = args.outdir
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # db_thermals is this application's own driver function
    run_ok = db_thermals(input_dir,
                         output_dir,
                         args.scpath, args.omp_threads)


if __name__ == "__main__":
    main()

In the above example, indir and outdir are required positional arguments, while scpath and omp-threads are optional arguments with defaults.

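To check the wiring without touching the command line, parse_args also accepts an explicit list of arguments. A trimmed-down sketch of the template above, with db_thermals and the simulation-specific options left out and made-up values passed in:

```python
import argparse

parser = argparse.ArgumentParser(description="Demo of the template above")
parser.add_argument("indir", type=str, help="Input dir")
parser.add_argument("outdir", type=str, help="Output dir")
parser.add_argument("-o", "--omp-threads", type=int, default=1,
                    help="Number of OMP threads")

# parse_args accepts an explicit list instead of reading sys.argv
args = parser.parse_args(["in_dir", "out_dir", "-o", "4"])
print(args.indir, args.outdir, args.omp_threads)  # in_dir out_dir 4
```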

A few common things to look out for while converting Python 2 to Python 3

If you are converting your Python 2 code to Python 3 manually, then apart from the print statement, here are a few things that I encountered.

1. xrange is not available in Python 3; simply use range.

2. dict.keys() in Python 2 returned a list that could be indexed, but in Python 3 you get the following error:

this_blade_face = str(initial_node_nums.keys()[-1])

TypeError: 'dict_keys' object does not support indexing

If you need to index the keys, then use list(dict.keys())

this_blade_face = str(list(initial_node_nums.keys())[-1])

3. Iterations

for loop in range(start, stop + 1):
TypeError: 'float' object cannot be interpreted as an integer

If start and stop are floats, they will not work in Python 3, so use integers in your range loop.
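One fix, assuming the bounds are whole-valued floats (e.g. parsed from a text file), is to cast them before looping:

```python
# e.g. bounds read from a text file arrive as floats
start, stop = 0.0, 5.0

# range() in Python 3 accepts only integers, so cast explicitly
steps = list(range(int(start), int(stop) + 1))
print(steps)  # [0, 1, 2, 3, 4, 5]
```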

4. map in Python 2 returned a list, but in Python 3 it returns a map object.

In Python 3, this code spits out the following error:

times = map(float, timeInput)
ramp1T1 = ((times[count] - times[count-1]) - timeoffset)/5

TypeError: 'map' object is not subscriptable

solution:

times = list(map(float, timeInput))

Well, that's it for this post, but do check out the 2to3 tool available in standard Python, which will do most of this conversion automatically for you.

Split text after nth occurrence of character

Python never fails to amaze me. You keep using it and keep finding these little gems hidden in it.

Suppose you have a string like this:

t = "3,5,2019,9.99, Argos,facial sauna ,(10 Argos gift voucher, card)(19.99)"

You want to split the numbers from the tags, meaning stop splitting after the fourth occurrence of the comma. How will you do it?

Simple.

t.split(",",4)

This returns a list like this:

['3',
'5',
'2019',
'9.99',
' Argos,facial sauna ,(10 Argos gift voucher, card)(19.99)']

Docstring of split:

S.split(sep=None, maxsplit=-1) -> list of strings

Return a list of the words in S, using sep as the
delimiter string. If maxsplit is given, at most maxsplit
splits are done. If sep is not specified or is None, any
whitespace string is a separator and empty strings are
removed from the result.
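A related trick: str.rsplit takes the same maxsplit argument but counts from the right, which helps when the structured part is at the end of the string:

```python
t = "3,5,2019,9.99, Argos,facial sauna ,(10 Argos gift voucher, card)(19.99)"

# Split only at the last comma, counting from the right
parts = t.rsplit(",", 1)
print(parts[-1])  # ' card)(19.99)'
```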

What surprised me is that I have used split for so long and yet hadn't seen this until last year.

Get Outlook Entries With Python 3

This was an old post, written for Python 2, but that code refused to work in Python 3, as pointed out by win.

Today I found some time to look at this and fix the code. So here's new, improved code that works in both Python 2 and Python 3. The new code gives the user the ability to change the date-time format, as suggested in the first comment.

import win32com.client
import datetime
from collections import namedtuple


event = namedtuple("event", "Start Subject Duration")


def get_date(appointment):
    try:  # Python 3: Start is a pywintypes datetime
        adate = datetime.datetime.fromtimestamp(appointment.Start.timestamp())
    except Exception:  # Python 2 fallback
        adate = datetime.datetime.fromtimestamp(int(appointment.Start))
    return adate


def getCalendarEntries(days=1, dateformat="%d/%m/%Y"):
    """
    Returns calender entries for days default is 1
    Returns list of events
    """
    Outlook = win32com.client.Dispatch("Outlook.Application")
    ns = Outlook.GetNamespace("MAPI")
    appointments = ns.GetDefaultFolder(9).Items
    appointments.Sort("[Start]")
    appointments.IncludeRecurrences = "True"
    today = datetime.datetime.today()
    begin = today.date().strftime(dateformat)
    tomorrow = datetime.timedelta(days=days) + today
    end = tomorrow.date().strftime(dateformat)
    appointments = appointments.Restrict(
        "[Start] >= '" + begin + "' AND [END] <= '" + end + "'")
    events = []
    for a in appointments:
        adate = get_date(a)
        events.append(event(adate, a.Subject, a.Duration))
    return events


if __name__ == "__main__":
    events = getCalendarEntries()

Sample result

[event(Start=datetime.datetime(2020, 4, 7, 8, 0), Subject='Quick Project Review (30 mins to save future work)', Duration=30),
 event(Start=datetime.datetime(2020, 4, 7, 9, 0), Subject='Billing detail', Duration=15),
 event(Start=datetime.datetime(2020, 4, 7, 9, 0), Subject='DF DW', Duration=60),
 event(Start=datetime.datetime(2020, 4, 7, 10, 0), Subject='hw', Duration=1),
 event(Start=datetime.datetime(2020, 4, 7, 10, 50), Subject='Canceled: Daily Standups are back and they are better than ever..!', Duration=10),
 event(Start=datetime.datetime(2020, 4, 7, 11, 0), Subject='Canceled: Sprint Planning / Refinement (Alternating Weeks)', Duration=120),
 event(Start=datetime.datetime(2020, 4, 7, 12, 0), Subject='Daily Cafe. / FIKA', Duration=30),
 event(Start=datetime.datetime(2020, 4, 7, 12, 0), Subject='CABI COP Weekly Meeting', Duration=30),
 event(Start=datetime.datetime(2020, 4, 7, 12, 0), Subject='Design System Engagement', Duration=30),
 event(Start=datetime.datetime(2020, 4, 7, 16, 0), Subject='rasise invoices', Duration=90),
 event(Start=datetime.datetime(2020, 4, 7, 16, 30), Subject='shutdown', Duration=15)]

Memory Profile Your code with Ipython

Let's say you have a function whose memory usage you want to check in Python. This can be evaluated with another IPython extension, memory_profiler.

The memory_profiler extension contains two useful magic functions: the %memit magic and the %mprun function.

The %memit magic gives the peak memory used by a statement, while %mprun provides a line-by-line usage of memory.

file: test_interp.py

import numpy as np
from scipy.interpolate import interp1d


def test(n):
    a = np.random.rand(n, 4000, 30)
    x = np.arange(n)
    xx = np.linspace(0, n, 2 * n)
    f = interp1d(x, a, axis=0, copy=False, fill_value="extrapolate",
                 assume_sorted=True)
    b = f(xx)

To test this function with %mprun

from test_interp import test
%mprun -f test test(1000)

This shows a line-by-line description of memory use.

Before using, we need to load the extension:

%load_ext memory_profiler

To install the extension, use the following:

pip install memory_profiler

Get Activation from scikit-learn’s neural network model

I have a simple multilayer perceptron that I use on my work computer. It works well and has 95% accuracy on a top-5 basis.

It's a delight to see it work, but I wanted more insight into what is happening inside it. One way is to see how and which neurons are firing.

Unlike Keras, sklearn doesn't give back the activations of each layer by itself, but there is a way to get them.

Following is code that helps get the activations from an sklearn neural network model.

import numpy as np


def get_activations(clf, X):
    hidden_layer_sizes = clf.hidden_layer_sizes
    if not hasattr(hidden_layer_sizes, "__iter__"):
        hidden_layer_sizes = [hidden_layer_sizes]
    hidden_layer_sizes = list(hidden_layer_sizes)
    layer_units = [X.shape[1]] + hidden_layer_sizes + [clf.n_outputs_]
    activations = [X]
    for i in range(clf.n_layers_ - 1):
        activations.append(np.empty((X.shape[0], layer_units[i + 1])))
    # _forward_pass is a private sklearn method; it fills the list in place
    clf._forward_pass(activations)
    return activations

via stackoverflow

Shutil make_archive to Rescue

I am amazed at the versatility of the shutil library; one usage that I discovered recently is its ability to create archives.

Previously I always used the zipfile module, but shutil's make_archive is such an intuitive function to use.

With a single line you can take a backup of a folder.

Example

shutil.make_archive(output_filename, 'zip', dir_name)

shutil.make_archive(base_name, format, root_dir=None, base_dir=None,
verbose=0, dry_run=0, owner=None, group=None, logger=None)

Create an archive file (e.g. zip or tar).
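A small self-contained sketch, archiving a throwaway folder and verifying the result with the zipfile module:

```python
import os
import shutil
import tempfile
import zipfile

# A throwaway folder with a single file in it
src = tempfile.mkdtemp()
with open(os.path.join(src, "notes.txt"), "w") as f:
    f.write("backup me")

# make_archive appends the extension and returns the archive's full path
archive = shutil.make_archive(os.path.join(tempfile.mkdtemp(), "backup"),
                              "zip", src)

with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())  # ['notes.txt']
```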

For more info check docs.

Shutil to Rescue

As part of my work, I need to run simulations that are driven by Python scripts, and the subprocess module is the workhorse for this. But there's a problem.

On Windows, the subprocess module doesn't look in the PATH unless you pass shell=True. However, shell=True can be a security risk if you're passing arguments that may come from outside your program.

To nonetheless make subprocess able to find the correct executable, we can use shutil.which.

Suppose the executable in your PATH is named data_loader:

subprocess.call([shutil.which('data_loader'), arg1, arg2])

shutil.which(cmd, mode=os.F_OK | os.X_OK, path=None)

Given a command, mode, and a PATH string, return the path which
conforms to the given mode on the PATH, or None if there is no such
file.
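A quick self-contained illustration (POSIX-style; the data_loader name is made up), creating a fake executable on a private search path:

```python
import os
import shutil
import stat
import tempfile

# A fake executable called data_loader on a private search path
bindir = tempfile.mkdtemp()
exe = os.path.join(bindir, "data_loader")
with open(exe, "w") as f:
    f.write("#!/bin/sh\necho ok\n")
os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR)

# which searches the supplied path string and returns the match, or None
found = shutil.which("data_loader", path=bindir)
print(found)
```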


Increasingly, I am loving the shutil library; it's so versatile.

Sound Alarm When Program Execution Completes in Python

Often you face a situation where your code takes extremely long to run, and you don't want to stare at it the whole time but do want to know when it is done.

In engineering analysis, simulations take a long time to run, and the Python driver program takes a long time to finish, so I face this problem a lot.

A simple solution is adding a beep at the end. Here's how to do it in Python.

def beep():
    # "\a" is the ASCII bell character
    print("\a")

beep()

Works on Windows, Linux, and macOS without any modification.

via here

Shelve it with python

One of the little gems hidden in python standard library is shelve.

The shelve module can be used as a simple persistent storage option for Python objects when a relational database is overkill. The shelf is accessed by keys, just as with a dictionary. The values are pickled and written to a database created and managed by dbm.

import shelve

with shelve.open('test_shelf.db') as s:
    s['key1'] = {
        'int': 310,
        'float': 3.14,
        'string': 'Sample string data',
        'array': [[1, 2, 3], [4, 5, 6]],
    }

I mostly work with large simulation data and run simulations from Python, and these simulations sometimes take days to run, so the simple persistent storage option provided by shelve is an intuitive way to save and restore my work.

An advantage is that we do not have to remember the order in which the objects are pickled, since shelve gives a dictionary-like object.

Here's sample code I use to store my long-running results and restore those values at a later time.

filename = r"flake_results.out"

my_shelf = shelve.open(filename, "n")

for key in ["stress", "strain", "plas", "creep", "temp"]:
    try:
        my_shelf[key] = globals()[key]
    except Exception:
        print("Error shelving: {}".format(key))

my_shelf.close()

To restore

my_shelf = shelve.open(filename)
for key in my_shelf:
    globals()[key] = my_shelf[key]

my_shelf.close()

for more info: Visit This

Participate in the 2018 Python Developer Survey

If you use Python in your work or hobby projects, please consider participating in the 2018 Python Developer Survey.

Reposting a PSF-Community email as a PSA

Excerpt from an email to the psf-community@python.org and psf-members-announce@python.org mailing lists:

As some of you may have seen, the 2018 Python Developer Survey is available. If you haven’t taken the survey yet, please do so soon! Additionally, we’d appreciate any assistance you all can provide with sharing the survey with your local Python groups, schools, work colleagues, etc. We will keep the survey open through October 26th, 2018.

Python Developers Survey 2018

We’re counting on your help to better understand how different Python developers use Python and related frameworks, tools, and technologies. We also hope you’ll enjoy going through the questions.

The survey is organized in partnership between the Python Software Foundation and JetBrains. Together we will publish the aggregated results. We will randomly choose and announce 100 winners to receive a Python Surprise Gift Pack (must complete the full survey to qualify).