
Python Programs and Examples

Friday, October 26, 2012

Find text/pattern under parent folder from matched file names

Posted by Raju Gupta at 10:38 AM – 1 comments
 

This small piece of code takes a folder name and a file-name pattern for the files that might contain the text, then takes the pattern or literal text to search for.
It prints each matching line and the name of the file it was found in while walking the folder.

import os
import re


def matchline(line, match):
   checkmatch = match.search(line)
   if checkmatch is not None:
      print "-----------------------------------------"
      print "Got the line...."+line
      return 0
   else:
      return -1

def readfilestring(filename, match):
   linestring = open(filename, 'r')
   l = linestring.readline()
   while l != '':
      if matchline(l, match) == 0:
         print "<----in file " + filename
         print "-----------------------------------------"
      l = linestring.readline()
   linestring.close()


def search_file(search_path,filename,stringpatt):
   try:
      dirList = os.listdir(search_path)
   except Exception, e:
      print "Search error for OS issue - "
      print e
   else:
      for d in dirList:
         if os.path.isdir(search_path+os.sep+d) == True:
            search_file(search_path+os.sep+d,filename,stringpatt)
         elif os.path.isfile(search_path+os.sep+d) == True:
            pmatch = filename.search(d)
            if pmatch is not None:
               readfilestring(search_path+os.sep+d, stringpatt)
         else:
            print "Unknown filesystem object - "+search_path+os.sep+d
   
if __name__ == '__main__':
   search_path = raw_input("Enter search_path: ")
   filepattern = raw_input("Enter filepattern: ")
   stringpattern= raw_input("Enter stringpattern: ")
   try:
      retest="filepattern"
      ire=re.compile(filepattern,re.I)
      retest="stringpattern"
      strre=re.compile(stringpattern,re.I)
   except Exception, e:
      print "Reg Ex problem - for "+retest
      print e
   else:
      if os.path.exists(search_path):
         find_file = search_file(search_path,ire,strre)
      else:
         print "Not a valid path - "+search_path
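On Python 3 the same recursive grep can be sketched with `os.walk`; the function and parameter names here are illustrative, not from the original script:

```python
import os
import re

def search_files(search_path, file_pattern, text_pattern):
    """Yield (path, line_number, line) for lines matching text_pattern
    in files whose names match file_pattern under search_path."""
    file_re = re.compile(file_pattern, re.I)
    text_re = re.compile(text_pattern, re.I)
    for root, dirs, files in os.walk(search_path):
        for name in files:
            if not file_re.search(name):
                continue
            path = os.path.join(root, name)
            # errors="replace" keeps the walk alive on odd encodings
            with open(path, errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    if text_re.search(line):
                        yield path, lineno, line.rstrip("\n")
```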



XML to CSV Converter

Posted by Raju Gupta at 10:35 AM – 2 comments
 
A Python program to extract each execution record (a few child elements in the XML) and create a CSV file.

import csv
from lxml import etree   # lxml provides the tag= filter used below

def xml2csv(inputFile,outputFile,elemName,ns,columnsToExtract):
   outFile=open(outputFile,'w')
   writer=csv.DictWriter(outFile,columnsToExtract,extrasaction='ignore',restval='')

   # Write header to CSV file
   writer.writerow(dict(zip(columnsToExtract,columnsToExtract)))

   # Sree - used iterparse so that only part of the XML is in memory, and
   # every element is cleared out of memory after use.
   for event,rec in etree.iterparse(inputFile, tag="%s%s" %(ns,elemName)):
      row=dict()
      for child in rec:
         row[child.tag[len(ns):]]=(child.text or '').strip()
      rec.clear()
      writer.writerow(row)
   outFile.close()
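The snippet above relies on lxml's `tag=` filter for `iterparse`, which the standard library lacks. A stdlib-only sketch (no namespace handling; names are illustrative):

```python
import csv
import xml.etree.ElementTree as etree

def xml2csv(input_file, output_file, elem_name, columns):
    """Stream <elem_name> records out of a large XML file into a CSV."""
    with open(output_file, "w", newline="") as out:
        writer = csv.DictWriter(out, columns, extrasaction="ignore", restval="")
        writer.writeheader()
        # default iterparse events fire at element end, when it is complete
        for event, elem in etree.iterparse(input_file):
            if elem.tag == elem_name:
                row = {child.tag: (child.text or "").strip() for child in elem}
                writer.writerow(row)
                elem.clear()  # free memory as we go
```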


Thursday, October 25, 2012

XLS to CSV converter

Posted by Raju Gupta at 10:27 AM – 0 comments
 
Leverage Python's csv writer and the xlrd reader to do this job.

import csv
import xlrd

def xls2csv(inputFile,outputFile):
   print "Input File : %s" %(inputFile)
   print "Output File : %s" %(outputFile)
   writer=csv.writer(open(outputFile,'wb'),delimiter=',',quotechar='"')
   book=xlrd.open_workbook(inputFile)
   sheet=book.sheet_by_index(0)
   for row in range(sheet.nrows):
      writer.writerow(sheet.row_values(row))


Spell Corrector in Python

Posted by Raju Gupta at 10:22 AM – 0 comments
 

import re, collections

def words(text): return re.findall('[a-z]+', text.lower())

def train(features):
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model

NWORDS = train(words(file('big.txt').read()))
alphabet = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b)>1]
    replaces   = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts    = [a + c + b     for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def known_edits2(word):
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)

def known(words): return set(w for w in words if w in NWORDS)

def correct(word):
    candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word]
    return max(candidates, key=NWORDS.get)
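This is the classic two-edits spell corrector. A Python 3 port, with a toy corpus standing in for `big.txt` (the corpus and any counts here are purely illustrative):

```python
import re
import collections

def words(text):
    return re.findall('[a-z]+', text.lower())

def train(features):
    # every unseen word gets a baseline count of 1
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model

alphabet = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces   = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts    = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, nwords):
    known = lambda ws: set(w for w in ws if w in nwords)
    edits2 = set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in nwords)
    candidates = known([word]) or known(edits1(word)) or edits2 or [word]
    return max(candidates, key=nwords.get)

# toy corpus in place of big.txt
NWORDS = train(words("spelling is hard but spelling can be learned"))
```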


Wednesday, October 24, 2012

Format a plain ASCII file using Python script

Posted by Raju Gupta at 5:03 AM – 0 comments
 
Format a plain ASCII file, adding page breaks and line feeds. This is handy for sending output to my dot-matrix printer; under Linux, just do pyprint filename >/dev/lp1 (or to whatever your printer device is).

#!/bin/env python

# Paginate a text file, adding a header and footer

import sys, time, string

# If no arguments were given, print a helpful message
if len(sys.argv)!=2:
    print 'Usage: pyprint filename'
    sys.exit(0)

class PrinterFormatter:
    def __init__(self, filename, page_len=58):
        # Save the time of creation for inclusion in the header
        self.now=time.asctime(time.localtime(time.time()))

        # Save the filename and page length
        self.filename=filename ; self.page_len = page_len

        # Zero all the counters
        self.page=0 ; self.count=0 ; self.header_written=0

    def write_header(self):
        # If the header for this page has just been written, don't
        # write another one.
        if self.header_written: return

        # Increment the page count, and reset the line count
        self.header_written=1 ; self.count=1 ; self.page=self.page+1

        # Write the header
        header=self.filename
        p=str(self.page) ; header=string.ljust(header, 38-len(p))+'['+p+']'
        header=string.ljust(header, 79-len(self.now))+self.now
        sys.stdout.write(header+'\n\n')

    def writeline(self, L):
        # If the line is exactly 80 columns wide, chop off any trailing
        # newline, since the printhead will wrap around anyway
        length=len(L)
        if (length % 80) == 0 and L and L[-1]=='\n': L=L[:-1]

        # If we've printed a pageful of lines, output a form feed and
        # output the header.
        if self.count>self.page_len:
            sys.stdout.write('\f')
            self.write_header()

        # Print the actual line of text
        sys.stdout.write(L)
        self.count=self.count+1 ; self.header_written=0

# Open the input file, and create a PrinterFormatter object, passing
# it the filename to put in the page header.

f=open(sys.argv[1], 'r')
o=PrinterFormatter(sys.argv[1])
o.write_header()   # Print the header on the first page

# Iterate over all the lines in the file; the writeline() method will
# output them and automatically add page breaks and headers where
# required.
while 1:
    L=f.readline()
    if L=="": break
    o.writeline(L)

# Write a final page break and close the input file.
sys.stdout.write('\f')
f.close()



Control Structures in Python

Posted by Raju Gupta at 5:00 AM – 7 comments
 
This Program explains different control structures in python

If
====
# basic "If"
if boolean:
    x = 1
else:
    x = 0

# Not so basic "If"
# Notice the new keywords!
if (boolean and otherboolean) or ( not boolean ):  #oops, this is just if(boolean), oh well...  :~)
    x = 1
elif not otherboolean:
    x = 0
else:
    x = 100000000000000000000 # Big numbers make if statements less boring!

While
=====

# This is it...
i = 0
while i < 100:
    i = i + 1

# More generally:
while condition:
    # do the following
    ...
    condition = False

# Infinite Loop:
while 1:
    print "I'm loopy!"


FOR
====

# Basic For Loop
for item in list:
    dosomething(item)

# The traditional C for loop has a different form in Python:
for x in range(0, 100):
    dosomething(x)

# More interesting example:
def buglove( UIUC ):
    for bug in UIUC:
        if type( bug ) == type( chinese_lady_beetle() ):
            crush( bug )
        else:
            hug( bug )






Tuesday, October 23, 2012

Python script to zip and unzip files

Posted by Raju Gupta at 9:30 PM – 5 comments
 

# Simple Application/Script to Compress a File or Directory
# Essentially you could use this instead of Winzip

"""
Path can be a file or directory
Archname is the name of the to be created archive
"""
from zipfile import ZipFile, ZIP_DEFLATED
import os  # File stuff
import sys # Command line parsing
def zippy(path, archive):
    paths = os.listdir(path)
    for p in paths:
        p = os.path.join(path, p) # Make the path relative
        if os.path.isdir(p): # Recursive case
            zippy(p, archive)
        else:
            archive.write(p) # Write the file to the zipfile
    return

def zipit(path, archname):
    # Create a ZipFile Object primed to write
    archive = ZipFile(archname, "w", ZIP_DEFLATED) # "a" to append, "r" to read
    # Recurse or not, depending on what path is
    if os.path.isdir(path):
        zippy(path, archive)
    else:
        archive.write(path)
    archive.close()
    return "Compression of \""+path+"\" was successful!"

instructions = "zipit.py:  Simple zipfile creation script. " + \
               "Recursively zips files in a directory into " + \
               "a single archive. " + \
               "e.g.:  python zipit.py myfiles myfiles.zip"

# Notice the __name__=="__main__"
# this is used to control what Python does when it is called from the
# command line.  I'm sure you've seen this in some of my other examples.
if __name__=="__main__":
    if len(sys.argv) >= 3:
        result = zipit(sys.argv[1], sys.argv[2])
        print result
    else:
        print instructions


================================================================================


# Simple script to Unzip archives created by
# our Zip Scripts.

import sys
import os
from zipfile import ZipFile, ZIP_DEFLATED

def unzip( path ):
    # Create a ZipFile Object Instance
    archive = ZipFile(path, "r", ZIP_DEFLATED)
    names = archive.namelist()
    for name in names:
        dirname = os.path.dirname(name)
        if dirname and not os.path.exists(dirname):
            # Create that directory (including any missing parents)
            os.makedirs(dirname)
        # Write files to disk
        temp = open(name, "wb") # create the file
        data = archive.read(name) # read the binary data
        temp.write(data)
        temp.close()
    archive.close()
    return "\""+path+"\" was unzipped successfully."
    
instructions = "This script unzips plain jane zipfiles:"+\
               "e.g.:  python unzipit.py myfiles.zip"

if __name__=="__main__":
    if len(sys.argv) == 2:
        msg = unzip(sys.argv[1])
        print msg
    else:
        print instructions
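A Python 3 round trip of the two scripts above, built on the same `zipfile` primitives; `zip_tree` and `unzip_tree` are illustrative names:

```python
import os
import zipfile

def zip_tree(path, archname):
    """Recursively zip a file or directory into archname."""
    with zipfile.ZipFile(archname, "w", zipfile.ZIP_DEFLATED) as archive:
        if os.path.isdir(path):
            for root, dirs, files in os.walk(path):
                for name in files:
                    full = os.path.join(root, name)
                    # store paths relative to the zipped directory's parent
                    archive.write(full, os.path.relpath(full, os.path.dirname(path)))
        else:
            archive.write(path, os.path.basename(path))

def unzip_tree(archname, dest):
    """Unzip into dest; extractall creates directories as needed."""
    with zipfile.ZipFile(archname, "r") as archive:
        archive.extractall(dest)
```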



Fast and efficient Backup script that works on Windows and Linux

Posted by Raju Gupta at 4:55 AM – 7 comments
 
This Python script backs up your directory structure and works on both Linux and Windows. It uses the shell directly to do the task, and shell commands are generally fast compared to doing the equivalent file operations in Python.

import os, os.path
import subprocess
import time
def backUpDir(path):
    """
    Creates a backup of the passed dir and creates a new dir in its place. The
    backup dir gets the date and time appended to its name.
    On success, returns a list consisting of two values:
        0: to signify the success
        None: means no error occurred.

    On error, return a list consisting of two values:
        -1 : to signify the failure
        error string: the exact error string
    """

    if os.path.exists(path) == True:
        #dir exists then backup old dir and create new
        backupDir = path + time.strftime('-%Y-%m-%d-%Hh%Mm%Ss')
        
        if os.name == "nt":
            #NT system - use the DOS command 'move' to rename the folder
            cmd = subprocess.Popen(["move", path, backupDir], \
                                    shell = True, \
                                    stdout = subprocess.PIPE, \
                                    stdin = subprocess.PIPE, \
                                    stderr = subprocess.PIPE)
        elif os.name == "posix":
            #POSIX System - use the appropriate POSIX command to rename the folder.
            cmd=subprocess.Popen(["mv", path, backupDir], \
                                    shell = True, \
                                    stdout = subprocess.PIPE, \
                                    stdin = subprocess.PIPE, \
                                    stderr = subprocess.PIPE)
            pass
        else:
            # Not supported on other platforms
            return [-1, "Not supported on %s platform" %(os.name)]
        (out, err) = cmd.communicate()
        if len(err) != 0:
            return [-1, err]
        else:
            os.mkdir(path)
            return [0, None]
    else:
        #create new dir
        os.mkdir(path)
        return [0, None]
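As an alternative to shelling out, `shutil.move` performs the rename portably on both platforms. A sketch of the same backup-and-recreate logic (`backup_dir` is an illustrative name):

```python
import os
import shutil
import time

def backup_dir(path):
    """Rename an existing dir to a timestamped backup, then recreate it.
    Returns [0, None] on success, mirroring the script above."""
    if os.path.exists(path):
        backup = path + time.strftime('-%Y-%m-%d-%Hh%Mm%Ss')
        try:
            shutil.move(path, backup)  # portable: no 'move'/'mv' shell-out
        except OSError as e:
            return [-1, str(e)]
    os.mkdir(path)
    return [0, None]
```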

Monday, October 22, 2012

File Splitter Script

Posted by Raju Gupta at 4:53 AM – 0 comments
 
The script would take in the input file with multiple rows and would give a row count. This can then be used to decide on the number of segmented files that would need to be generated based on the required number of rows in each file. Please note that I shall be making a GUI version of the same soon for simplifying use.

#!/usr/bin/env python
# encoding: utf-8

import sys
from math import *
import os
import random
from Tkinter import *
import tkFileDialog
 

def linesplit():
        line=0
        k1=tkFileDialog.askopenfilename()
        ff=open(k1)
        for lineee in ff:
                line=line+1
        print "total number of lines is: "+str(line)
        ff.close()
        line=0
        user_ask=int(raw_input("Specify the number of records in each file = "))
        print user_ask
        ff=open(k1)
        i=1
        temp=user_ask
        for lineee in ff:
                line=line+1
                temp_filename=r"c:\Temp\split_"+str(i)+".txt"
                if line <= user_ask:
                        ff1=open(temp_filename,'a+')
                        ff1.write(lineee)
                        ff1.close()
                if line == user_ask:
                        user_ask=user_ask+temp
                        i=i+1
        ff.close()
        flag=1
        return (line,flag)


k=linesplit()
flag=k[1]
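The splitting logic can be sketched without the Tkinter prompt; `split_file` and its output naming are illustrative:

```python
import os

def split_file(path, records_per_file, out_dir):
    """Split path into sequentially numbered files of at most
    records_per_file lines each; return the created file names."""
    created = []
    out = None
    with open(path) as src:
        for lineno, line in enumerate(src):
            if lineno % records_per_file == 0:
                # start a new output file every records_per_file lines
                if out:
                    out.close()
                name = os.path.join(out_dir, "split_%d.txt" % (len(created) + 1))
                out = open(name, "w")
                created.append(name)
            out.write(line)
    if out:
        out.close()
    return created
```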



Python script for walking the directory tree structure but excluding some directories and files

Posted by Raju Gupta at 4:49 AM – 4 comments
 
The script walks the directory structure recursively. Some of the directories and files can be excluded from the walk by providing them in an exclude_list


def walkExclusive(top, topdown=True, onerror=None, exclude_list=[]):
    """Directory tree generator.

For each directory in the directory tree rooted at top (including top
itself, but excluding '.' and '..'), yields a 3-tuple

dirpath, dirnames, filenames

dirpath is a string, the path to the directory. dirnames is a list of
the names of the subdirectories in dirpath (excluding '.' and '..').
filenames is a list of the names of the non-directory files in dirpath.
Note that the names in the lists are just names, with no path components.
To get a full path (which begins with top) to a file or directory in
dirpath, do os.path.join(dirpath, name).

If optional arg 'topdown' is true or not specified, the triple for a
directory is generated before the triples for any of its subdirectories
(directories are generated top down). If topdown is false, the triple
for a directory is generated after the triples for all of its
subdirectories (directories are generated bottom up).

When topdown is true, the caller can modify the dirnames list in-place
(e.g., via del or slice assignment), and walk will only recurse into the
subdirectories whose names remain in dirnames; this can be used to prune
the search, or to impose a specific order of visiting. Modifying
dirnames when topdown is false is ineffective, since the directories in
dirnames have already been generated by the time dirnames itself is
generated.

exclude_list is a list containing items which are not to be walked in the
directory structure. e.g. exclude_list = ['.svn', '.project']

By default errors from the os.listdir() call are ignored. If
optional arg 'onerror' is specified, it should be a function; it
will be called with one argument, an os.error instance. It can
report the error to continue with the walk, or raise the exception
to abort the walk. Note that the filename is available as the
filename attribute of the exception object.

Caution: if you pass a relative pathname for top, don't change the
current working directory between resumptions of walk. walk never
changes the current directory, and assumes that the client doesn't
either.

Example:



    from os.path import join, getsize
    for root, dirs, files in walkExclusive('python/Lib/email'):
        print root, "consumes",
        print sum([getsize(join(root, name)) for name in files]),
        print "bytes in", len(files), "non-directory files"
        if 'CVS' in dirs:
            dirs.remove('CVS')  # don't visit CVS directories
    """

    from os.path import join, isdir, islink
    from os import error, listdir
    
    # We may not have read permission for top, in which case we can't
    # get a list of the files the directory contains.  os.path.walk
    # always suppressed the exception then, rather than blow up for a
    # minor reason when (say) a thousand readable directories are still
    # left to visit.  That logic is copied here.
    try:
        # Note that listdir and error are imported at the top of
        # this function.
        names = listdir(top)
    except error, err:
        if onerror is not None:
            onerror(err)
        return
    
    if exclude_list != []:
        from copy import deepcopy
        temp_name = deepcopy(names)
        names = [item for item in temp_name if item not in exclude_list]
        
    dirs, nondirs = [], []
    for name in names:
        if isdir(join(top, name)):
            dirs.append(name)
        else:
            nondirs.append(name)

    if topdown:
        yield top, dirs, nondirs
    for name in dirs:
        path = join(top, name)
        if not islink(path):
            for x in walkExclusive(path, topdown, onerror, exclude_list):
                yield x
    if not topdown:
        yield top, dirs, nondirs
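On modern Python the same pruning falls out of `os.walk`'s in-place `dirnames` modification; a compact sketch (the wrapper name is illustrative):

```python
import os

def walk_exclusive(top, exclude_list=()):
    """Like os.walk, but skips any directory or file named in exclude_list."""
    for dirpath, dirnames, filenames in os.walk(top):
        # pruning dirnames in place stops os.walk from descending into them
        dirnames[:] = [d for d in dirnames if d not in exclude_list]
        filenames = [f for f in filenames if f not in exclude_list]
        yield dirpath, dirnames, filenames
```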


Sunday, October 21, 2012

Python XML Parser

Posted by Raju Gupta at 5:30 PM – 0 comments
 

In our project we kept the operations rules as XML, and the parser below was used to get the conditions to be applied for each operation.

xml.dom.minidom is used to parse the XML file and return a document object. The document object is used to obtain the child nodes; for each child node, the corresponding fields and attributes are retrieved using the getAttribute method and appended as a tuple.
The tuple is then stored in a dictionary with the node name as the key.



 import xml.dom.minidom

class XMLParser:
    def __init__(self, filePath):
        self.xml_file_path = filePath

    def get_a_document(self):
        return xml.dom.minidom.parse(self.xml_file_path)

    def process_xml_return_dict(self):
        doc = self.get_a_document()
        fieldMapping = doc.childNodes[0]
        operations = {}
        for layer in fieldMapping.childNodes:
            if layer.nodeType == xml.dom.minidom.Node.TEXT_NODE or layer.nodeType == xml.dom.minidom.Node.COMMENT_NODE:
                continue
            fieldList = self.get_fields(layer)
            attrList = self.get_attributes(layer)
            operations[layer.nodeName] = (attrList, fieldList)
        return operations

    def get_attributes(self, node):
        attrList = []
        nodeMap = node.attributes
        for index in range(nodeMap.length):
            attrName = nodeMap.item(index).name
            attrValue = node.getAttribute(attrName)
            attrList.append((attrName, attrValue))
        return attrList

    def get_fields(self, node):
        fieldList = []
        for childNode in node.childNodes:
            if childNode.nodeType != xml.dom.minidom.Node.TEXT_NODE and childNode.nodeType != xml.dom.minidom.Node.COMMENT_NODE:
                fromField = childNode.getAttribute('FromField')
                toField = childNode.getAttribute('ToField')
                fieldValue = childNode.getAttribute('Value')
                condition = childNode.getAttribute('Condition')
                fieldList.append((fromField, toField, fieldValue, condition))
        return fieldList
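A self-contained sketch of the same minidom pattern on an in-memory rule file; the element and attribute names below are invented for illustration:

```python
import xml.dom.minidom

RULES = """<FieldMapping>
  <Copy enabled="true">
    <Rule FromField="src" ToField="dst" Value="" Condition="notnull"/>
  </Copy>
</FieldMapping>"""

doc = xml.dom.minidom.parseString(RULES)
operations = {}
for layer in doc.childNodes[0].childNodes:
    if layer.nodeType != xml.dom.minidom.Node.ELEMENT_NODE:
        continue  # skip whitespace text nodes and comments
    # collect (name, value) pairs for the node's attributes
    attrs = [(layer.attributes.item(i).name,
              layer.getAttribute(layer.attributes.item(i).name))
             for i in range(layer.attributes.length)]
    # collect the rule fields from each child element
    fields = [(c.getAttribute('FromField'), c.getAttribute('ToField'),
               c.getAttribute('Value'), c.getAttribute('Condition'))
              for c in layer.childNodes
              if c.nodeType == xml.dom.minidom.Node.ELEMENT_NODE]
    operations[layer.nodeName] = (attrs, fields)
```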

Python pickle module & Zip file creation

Description
This example shows how to use the pickle module and how to create a Zip file.
 
 # below is a typical Python dictionary object of roman numerals
 romanD1 = {'I':1,'II':2,'III':3,'IV':4,'V':5,'VI':6,'VII':7,'VIII':8,'IX':9,'X':10}
  
 # to save a Python object like a dictionary to a file
 # and load it back intact you have to use the pickle module
 import pickle
 print "The original dictionary:"
 print romanD1
 file = open("roman1.dat", "w")
 pickle.dump(romanD1, file)
 file.close()
 # now load the dictionary object back from the file ...
 file = open("roman1.dat", "r")
 romanD2 = pickle.load(file)
 file.close()
 print "Dictionary after pickle.dump() and pickle.load():"
 print romanD2

 # for large text files you can write and read a zipped file (PKZIP format)
 # notice that the syntax is mildly different from normal file read/write
 import zipfile
 str1 = "First sample string. "   # sample content; any large text would do
 str2 = "Second sample string."
 zfilename = "English101.zip"
 zout = zipfile.ZipFile(zfilename, "w")
 zout.writestr(zfilename, str1 + str2)
 zout.close()
 # read the zipped file back in
 zin = zipfile.ZipFile(zfilename, "r")
 strz = zin.read(zfilename)
 zin.close()
 print "Testing the contents of %s:" % zfilename
 print strz


IP Address Generation

Posted by Raju Gupta at 4:39 AM – 0 comments
 

This script (written in Ruby) asks the user for two IP addresses: one is the start of the IP range, and the second is the end of it. Next, a new IP object called i is created using the IP class defined below. The final step before generating the IPs is to initialize the file the IP addresses will be written to, named ofile. Now the fun begins.

For each item returned, the results will be output to ofile. Using the IP class method succ!, an until loop calls the succ! method until i equals end_ip . Once the two values are equal, that means the ending IP address has been generated and the output file is closed.

The script relies on a custom class called IP, which has four methods: initialize, to_s, succ, and succ!. The IP class is important because, once an object is created, the IP address is stored as a class variable for easy tracking. The first method called, when i is declared, is initialize. This sets @ip to start_ip . Next, succ! is called to begin creating the range of IPs. succ! calls succ and utilizes the replace method to overwrite the contents in @ip whenever succ returns a value . The meat of the IP class is located in the method succ . If @ip ever increments to the highest IP address, the script will return 255.255.255.255. IP addresses can only go up to that value.

Next, the IP address, stored in @ip, is split apart in reverse order, using the period as a delimiter. The values are stored in an array called parts. After the IP address is properly separated, a new code block is called on the array using the each_with_index method to access two pieces of information: the index being passed and the value. Within this block, the value in part is compared against 255, again to prohibit invalid IP addresses. If the value is equal to 255, then it is reset to zero. The one exception to the zero reset is if the value of i is equal to 3, since that is the first octet of the IP. If part is less than 255, the method succ! is called and the if/else statement breaks.

After each part has been run through the code block, the IP address is put back together opposite of how it was taken apart. The script puts each piece back together using the join method, with periods in between the elements, all in reverse order . As mentioned previously, the succ! method is called until the end_ip address is equal to the results of succ!. That's all there is to perfectly generating an IP address range.


class IP
     def initialize(ip)
         @ip = ip
     end

     def to_s
         @ip
     end

     def==(other)
         to_s==other.to_s
     end

     def succ
         return @ip if @ip == "255.255.255.255"
         parts = @ip.split('.').reverse
         parts.each_with_index do |part,i|
             if part.to_i < 255
                 part.succ!
                 break
             elsif part == "255"
                 part.replace("0") unless i == 3
             else
                 raise ArgumentError, "Invalid number #{part} in IP address"
             end
         end
         parts.reverse.join('.')
     end

     def succ!
         @ip.replace(succ)
     end
 end

 print "Input Starting IP Address: "
 start_ip = gets.strip 

 print "Input Ending IP Address: "
 end_ip = gets.strip

 i = IP.new(start_ip)

 ofile = File.open("ips.txt", "w")
 ofile.puts i.succ! until i == end_ip
 ofile.close
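The script above is written in Ruby; in Python, the stdlib `ipaddress` module can generate the same inclusive range in a few lines (the function name is illustrative):

```python
import ipaddress

def ip_range(start_ip, end_ip):
    """Yield every IPv4 address from start_ip to end_ip, inclusive."""
    start = ipaddress.IPv4Address(start_ip)
    end = ipaddress.IPv4Address(end_ip)
    # IPv4Address converts to/from plain integers, so an int loop suffices
    for value in range(int(start), int(end) + 1):
        yield str(ipaddress.IPv4Address(value))
```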


Saturday, October 20, 2012

Orphan Checker

Posted by Raju Gupta at 9:30 PM – 0 comments
 

This Ruby script does not use any outside libraries, which keeps its execution simple. We start by initializing the arrays that we'll be using to keep track of our links and orphan files. Next, we check that our links.txt file exists; if not, there isn't much point in continuing, so the script exits with a clear error message. If links.txt does exist, we continue by opening the file and reading in all of the contents line by line. You can change this to a comma-separated values (CSV) file, but I prefer the readability of one link per line.

After the links have been stored in the array links, the script begins to index every file in the current working directory. The results will be stored in an array called orphans . If there are subdirectories, the script will also index those files. Presumably, you would run this in the root directory of your web server to take full advantage of this script.

Now that the script has both the links and local files indexed, it is time to start comparing the two arrays, and see what's left . I called the second array orphans because I will be deleting any entry that exists within the link array. Whatever is left will be files not included on the public-facing side of the web server.

The script ends by creating a file called orphans.txt in the script's directory and writing the results to that file . Finally, after the code block is finished, the file is closed and the script finished.


links = Array.new
 orphans = Array.new
 dir_array = [Dir.getwd]

 unless File.readable?("links.txt")
     puts "File is not readable."
     exit
 end

 File.open('links.txt', 'rb') do |lv|
     lv.each_line do |line|
         links << line.chomp
     end
 end

 begin
     p = dir_array.shift 
     Dir.chdir(p)

     Dir.foreach(p) do |filename|
         next if filename == '.' or filename == '..'
         if !File::directory?(filename)
                orphans << p + File::SEPARATOR + filename
         else
             dir_array << p + File::SEPARATOR + filename
         end
     end
 end while !dir_array.empty?

 orphans -= links

 File.open("orphans.txt", "wb") do |o|
       o.puts orphans
 end
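A Python sketch of the same orphan check, using `os.walk` and list difference (`find_orphans` and the file layout are illustrative):

```python
import os

def find_orphans(root, links_file):
    """Return files under root that are not listed in links_file."""
    with open(links_file) as fh:
        links = set(line.strip() for line in fh)
    found = []
    # index every file under root, including subdirectories
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    # whatever is not linked is an orphan
    return sorted(p for p in found if p not in links)
```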



HTML Email using Python

Posted by Raju Gupta at 10:26 AM – 4 comments
 
We created a module called sendmail which emails HTML content. Each application dumps data from its data structure into HTML format and then calls this module.

def createhtmlmail( html, text, subject ):
   """Create a mime-message that will render HTML in popular
      MUAs, text in better ones"""
   import MimeWriter
   import mimetools
   import cStringIO

   out = cStringIO.StringIO() # output buffer for our message
   htmlin = cStringIO.StringIO(html)

   writer = MimeWriter.MimeWriter(out)
#
# set up some basic headers... we put subject here
# because smtplib.sendmail expects it to be in the
# message body
#
   
   writer.addheader("Subject", subject)
   writer.addheader("MIME-Version", "1.0")
#
# start the multipart section of the message
# multipart/alternative seems to work better
# on some MUAs than multipart/mixed
#
   writer.startmultipartbody("alternative")
   writer.flushheaders()
#
# the plain text section
#
   if text != None:
      txtin = cStringIO.StringIO(text)
      subpart = writer.nextpart()
      subpart.addheader("Content-Transfer-Encoding", "quoted-printable")
      pout = subpart.startbody("text/plain", [("charset", 'us-ascii')])
      mimetools.encode(txtin, pout, 'quoted-printable')
      txtin.close()
#
# start the html subpart of the message
#
   subpart = writer.nextpart()
   subpart.addheader("Content-Transfer-Encoding", "quoted-printable")
#
# returns us a file-ish object we can write to
#
   pout = subpart.startbody("text/html", [("charset", 'us-ascii')])
   mimetools.encode(htmlin, pout, 'quoted-printable')
   htmlin.close()
#
# Now that we're done, close our writer and
# return the message body
#
   writer.lastpart()
   msg = out.getvalue()
   out.close()
   #print msg
   return msg

def sendmail (sender,to,subject,htmlFilename):
   import smtplib
   f = open(htmlFilename, 'r')
   html = f.read()
   f.close()
   message = createhtmlmail(html, None, subject)
   server = smtplib.SMTP("mta-hub")
   server.sendmail(sender, to, message)
   server.quit()
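`MimeWriter` and `mimetools` were removed in Python 3; the equivalent multipart/alternative message takes a few lines with `email.message.EmailMessage` (the MTA host stays a placeholder):

```python
from email.message import EmailMessage

def create_html_mail(html, text, subject, sender, to):
    """Build a multipart/alternative message: plain text plus HTML."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = to
    # plain-text part first, then the HTML alternative
    msg.set_content(text or "This message requires an HTML-capable reader.")
    msg.add_alternative(html, subtype="html")
    return msg

# sending, with the same placeholder host as above:
# import smtplib
# with smtplib.SMTP("mta-hub") as server:
#     server.send_message(msg)
```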



Python interface to load commands.

Posted by Raju Gupta at 4:26 AM – 0 comments
 

To standardise commands, this approach uses a config file that contains the system commands. Usage:

The user creates a command file which lists commands with their parameters as %s placeholders. A typical command file contains the following line for a copy command.

CP_CMD=cp %s %s
Now when I call the create_command function
cmd_str=create_command("/dir1","/dir2","CP_CMD")
cmd_str contains "cp /dir1 /dir2", which can eventually be executed.

The command file can contain hundreds of commands listed one below the other as follows

CP_CMD=cp %s %s
FTP_CMD=ftp %s
DIR_CMD=dir %s


This is just a code snippet elaborating the idea. Adding features like exceptions, Fancier formatting , Error redirection etc is left to you !


""" This Function loads a command file into memory as a dictonary , With Commands as keys"""
def load_command_file(file_name):
 command_dict={}
 if file_name.isalpha():
  print("Error!!command file name should be in ascii\n");
  return command_dict
 file=open(file_name,'r')
 file_lines=file.readlines()
 for line in file_lines:        
  command_list=line.strip().split("=")
  command_dict[command_list[0].strip()]=command_list[1].strip()
 file.close()    
 return command_dict

########################################################################################################

""" This function uses the above mentioned methods to create commands based on passed args""" 
def create_command(*args):
 command_dict=load_command_file(r"C:\Documents and Settings\Raj\Desktop\command_file.txt")
#print(command_dict)
 arg_len=len(args)
 command_str=command_dict[args[arg_len-1]]
 if command_str.count('%s')!=arg_len-1:
  print("Error mismatch in args passed and args actually needed")
  sys.exit() 
 command=command_str%(args[:-1])
 return command
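As a quick self-contained illustration of the same idea, the sketch below parses the command templates from an in-memory string instead of the machine-specific desktop path above; the function names here are illustrative, not part of the original module.

```python
def parse_commands(text):
    """Parse NAME=template lines into a dict of command templates."""
    commands = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and malformed lines
        name, template = line.split("=", 1)
        commands[name.strip()] = template.strip()
    return commands

def build_command(commands, name, *params):
    """Substitute params into the named %s template."""
    template = commands[name]
    if template.count("%s") != len(params):
        raise ValueError("argument count mismatch for " + name)
    return template % params

# example command file content, as in the post
CONFIG = """
CP_CMD=cp %s %s
FTP_CMD=ftp %s
"""
```

With this, `build_command(parse_commands(CONFIG), "CP_CMD", "/dir1", "/dir2")` yields `cp /dir1 /dir2`.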


Friday, October 19, 2012

Python script to Validate Tabular format Data and database restore

Posted by Raju Gupta at 4:20 AM – 0 comments
 

Our Python utility provides all of this functionality.

Major functionality:

  • Process a batch of large csv/text files
  • Can iterate into sub-folders
  • Files can be filtered on criteria expressed in regular expression format
  • Reads csv/text files
  • Performs sanity and basic column-based validation checks like null values, duplicate values, max, min, unique values etc.
  • It also infers the data type of every column
  • SQL Server connection
  • On-the-fly table creation and bulk insertion for any given text/csv file
  • Logging of all activities using a separate logger module
  • Highly configurable utility
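To show the data-type inference idea in isolation, here is a minimal sketch; the regexes mirror those used later in Field.py, but the function itself is a simplified stand-in, not the utility's exact code.

```python
import re

INT_RE = re.compile(r'[0-9]+$')
FLOAT_RE = re.compile(r'[0-9]*\.[0-9]+$')
DATE_RE = re.compile(r'[0-9]{1,4}([.,/-])[0-9]{1,2}\1[0-9]{1,4}$')

def infer_type(values):
    """Infer a column type from its values; any value that breaks the
    pattern demotes the column to varchar, as in the utility."""
    inferred = "null"
    for value in values:
        if not value:
            continue  # nulls do not change the inferred type
        if INT_RE.match(value) and inferred in ("null", "integer"):
            inferred = "integer"
        elif FLOAT_RE.match(value) and inferred in ("null", "integer", "numeric"):
            inferred = "numeric"
        elif DATE_RE.match(value) and inferred in ("null", "date"):
            inferred = "date"
        else:
            return "varchar"
    return inferred
```

For example, a column of "1" and "2.5" is inferred as numeric, while "1" followed by "abc" falls back to varchar.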

Main.py:

import Utility
import Logger
import FieldTypes
import SQLServerConnection

import datetime
import os
import re
import string
import pyodbc


#start time
Logger.LogMessage(str(datetime.datetime.now()))
dbConn = SQLServerConnection.SQLServerConnection()
temp = Utility.Utility()

#read source directory
reader = open("srcdir.txt","rb")
srcdir = reader.readline()

#iterate source directory
temp.iterateFolder(srcdir)
temp.reportProfile()
print "profiling done"

reader.close()

#end time
Logger.LogMessage(str(datetime.datetime.now()))
print datetime.datetime.now()


Utility.py

import SQLServerConnection
import Field
import csv
import os
import re
import string
from Logger import Logger
import datetime



class Utility:
    fileName = ""
    
   
    def __init__(self):
        #initialise class variable
        
        #map for fieldName -> fieldInformation
        self.data = dict()
        self.profileOutput = open('C:\\temp\\profile.csv','w')
        self.output = ['FileName','FieldName','InferredDataType','Len','TotalRows','NullValues','UniqueValues']
        self.profileOutput.write(','.join(self.output))
        self.profileOutput.write('\n')
        self.profileList = []
        
        
    def readcvsFile(self, fileName, onlyName):
        ROWSEPARATER = "\n"
        FIELDSEPARATER = "\t"
        self.data = dict()
        
        #read as a text file
        reader = open(fileName,"rb")
        headers = reader.readline().split(FIELDSEPARATER)

        #header row
        Column = len(headers)
       
        #initialised the map
        for column in headers:
            self.data[column]=Field.Field(column.rstrip())
        Logger.LogMessage("No of column:        " + str(Column))
        
        #process every row
        RowCount = 0
        ErrorRow = 0
        for rowstring in reader:
            RowCount = RowCount + 1
            rowCol = rowstring.count(FIELDSEPARATER) + 1
            row = rowstring.split(FIELDSEPARATER)
            #need to check value of last col
            #if it has "," it means there are more delimiter in text file
            if rowCol == Column:
                i =0
                for column in row:
                    (self.data[headers[i]]).addValue(column)
                    i = i + 1
            else:
                Logger.LogMessage("Row number " + str(RowCount) + " is not valid. It has " + str(rowCol) + " columns.")
                ErrorRow = ErrorRow + 1
                   
        Logger.LogMessage("No of Rows processed(Other than header file):"+str(RowCount))
        Logger.LogMessage("Total Errors:                                "+str(ErrorRow))
        reader.close()
        
        
        name, extension = onlyName.split(".")
        createFile = "create table " +  name + "("
        fieldCount = 1
        
        for k,v in self.data.iteritems():
            #mark end of processing and take profiled output for each field
            v.endOfLoading()
            self.profileList.append([fileName, v.FieldName, v.FieldString, str(v.FieldSize), str(v.RecordSize), str(v.NullValues), str(v.UniqueValues)])
            if fieldCount > 1:
                createFile = createFile + ", "
            createFile = createFile + v.FieldName + " varchar(1000) "
            
            fieldCount = fieldCount + 1
            v.clear()

        createFile = createFile + ")"
        Logger.LogMessage("Sql for file creation:" + createFile)
        SQLServerConnection.cursor.execute(createFile)
        SQLServerConnection.conn.commit()
        
        bulkcopySQL = "BULK INSERT " + name + " FROM '" + fileName + "' WITH ( FIELDTERMINATOR='\t',FIRSTROW=2,ROWTERMINATOR='" + chr(10) + "')"
        Logger.LogMessage("running bluk copy:" + bulkcopySQL)
        SQLServerConnection.cursor.execute(bulkcopySQL)
        SQLServerConnection.conn.commit()
        
        del self.data
        
        

    def iterateFolder(self,dir):
        fileexp = re.compile(r'\w*\.csv')
        
        #iterate for the directory
        for f in os.listdir(dir):
            
            if os.path.isfile(os.path.join(dir,f)) and fileexp.match(f) is not None:
                Logger.LogMessage("***************************************")
                Logger.LogMessage("FileName:             " + f)
                Logger.LogMessage("Directory Name:      " + dir)
                self.readcvsFile(os.path.join(dir,f), f)
                Logger.LogMessage("*************************************")
                Logger.LogMessage(str(datetime.datetime.now()))
                Logger.LogMessage(" ")
                print "Processing " , f
               
            
                
                Logger.flush()
            elif os.path.isdir(os.path.join(os.getcwd(),f)): 
                print os.path.join(dir,f)
                self.iterateFolder(os.path.join(dir,f))
          
                
    
    def reportProfile(self):
        
        for detail in self.profileList:
            self.profileOutput.write(','.join(detail))
            self.profileOutput.write('\n')
            


Field.py:

from types import *
import re
import datetime
import Logger

class FieldTypes:
    Null,Integer, Float, Varchar, Date, DateTime = range(6)
    intexp = re.compile('[0-9]+$')
    floatexp = re.compile('[0-9]*\.[0-9]+$')
    dateexp = re.compile(r'[0-9]{1,4}([.,/-])[0-9]{1,2}\1[0-9]{1,4}$')
    
    
    #this function returns the corresponding string for an enum type
    def getEnumString(self, type):
        if type == self.Integer: 
            return "Integer"
        if type is self.Float:
            return "numeric"
        if type is self.Varchar:
            return "varchar"
        if type is self.Date:
            return "date"
        if type is self.Null:
            return "null"
    

    #this function infers the datatype using a combination of the new value and the datatype calculated so far
    def fieldType(self,value, previousType):

        if len(value) == 0:
            return previousType
        
        if previousType is self.Float:
            if self.floatexp.match(value) is not None:
                return self.Float
            
        if previousType is self.Date:
            if self.dateexp.match(value) is not None:
                return self.Date
            
        if previousType is self.Null:
            if self.intexp.match(value) is not None: 
                return self.Integer
            if self.floatexp.match(value) is not None:
                return self.Float
            if self.dateexp.match(value) is not None:
                return self.Date
            
        if previousType is self.Integer:
            if self.intexp.match(value) is not None: 
                return self.Integer
            if self.floatexp.match(value) is not None:
                return self.Float
            if self.dateexp.match(value) is not None:
                return self.Date

        return self.Varchar
            
          
    def getFieldType(self,valueList):
        Type = self.Null
        for value in valueList:
            if Type is self.Varchar:
                break
            Type = self.fieldType(value, Type)
        del valueList
        return Type
            

#The Field class represents a complete field domain in the table. It has all properties
#related to a field, like name, datatype, size, min value etc. It also holds the
#list of values for that particular field.
class Field:
   
    def __init__(self, name, storeValue=0):
        self.FieldName = name
        self.FieldSize = 0
        self.FieldType = FieldTypes.Null
        self.FieldValues = set()
        self.RecordSize = 0
        self.UniqueValues = 0
        self.FieldString = ""
        self.NullValues = 0
        self.storeVal = storeValue
       
    def addValue(self, val):
        
        self.RecordSize = self.RecordSize + 1
        #print self.FieldValues
        if len(val) == 0:
            self.NullValues = self.NullValues + 1
        else:
            if self.storeVal != 0:
                self.FieldValues.add(val.rstrip())
            if len(val) > self.FieldSize:
                self.FieldSize = len(val)
        
    def clear(self):
        del self.FieldValues

    
    def endOfLoading(self):
        if self.storeVal != 0:
            self.FieldType = FieldTypes().getFieldType(self.FieldValues)
            self.UniqueValues = len(self.FieldValues)
            self.FieldString = FieldTypes().getEnumString(self.FieldType)
        
    def printSummary(self):
        Logger.LogMessage(self.FieldName + " " +  self.FieldString+ " " +str(self.FieldSize)+ str(self.RecordSize)+ str(self.NullValues)+ str(self.UniqueValues)) 
        
    def printValues(self):
        for value in self.FieldValues:
            print value
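The per-field bookkeeping above (record count, null count, unique values, maximum length) reduces to a few lines; this sketch is a simplified stand-in for the Field class, useful for seeing the profiling output on its own.

```python
def profile_column(values):
    """Return basic sanity-check stats for one column of string values."""
    stats = {"rows": 0, "nulls": 0, "max_len": 0}
    seen = set()
    for value in values:
        stats["rows"] += 1
        if not value:
            stats["nulls"] += 1       # empty string counts as a null
        else:
            seen.add(value.rstrip())  # track distinct values
            stats["max_len"] = max(stats["max_len"], len(value))
    stats["unique"] = len(seen)
    return stats
```

For a column like `["a", "", "bb", "a"]` this reports 4 rows, 1 null, 2 unique values and a maximum length of 2.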
            



Python script to Load Configuration file content into variable

Posted by Raju Gupta at 4:13 AM – 1 comments
 

Module Name : LoadCfg
Parameters : Input File
Output : Dictionary Variable with all the configuration values in organised way

Configuration File Format
-----------------------------------
config.ini
PARAM1 = VALUE1
PARAM2 = VALUE2
.
.
.

Example:
Cricket_config.ini
Captain = Dhoni
Vice Captain = Shewag
Coach = Gary Kristen
Team = India

teamInfo = LoadCfg("Cricket_config.ini")
teamInfo["Captain"] will Give Dhoni
teamInfo["Team"] will Give India


def LoadCfg(filename):
   result=[]
   fileptr = open(filename,"r")
   for line in fileptr:
        if "=" not in line:
            continue   # skip blank or malformed lines
        spt_str = line.split("=", 1)
        spt_str[0] = spt_str[0].strip()
        spt_str[1] = spt_str[1].strip('\n').strip().strip('"')
        result.append(spt_str)
   fileptr.close()
   return dict(result)
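A quick self-contained check of the idea, writing the Cricket_config.ini example to a temporary directory; load_cfg below is a simplified local copy of LoadCfg used only for the demonstration.

```python
import os
import tempfile

def load_cfg(filename):
    """Simplified local copy of LoadCfg: PARAM = VALUE lines into a dict."""
    result = {}
    with open(filename, "r") as fileptr:
        for line in fileptr:
            if "=" not in line:
                continue  # skip blank or malformed lines
            key, value = line.split("=", 1)
            result[key.strip()] = value.strip().strip('"')
    return result

# recreate the Cricket_config.ini example from the post
tmpdir = tempfile.mkdtemp()
cfg_path = os.path.join(tmpdir, "Cricket_config.ini")
with open(cfg_path, "w") as f:
    f.write("Captain = Dhoni\nVice Captain = Shewag\nCoach = Gary Kristen\nTeam = India\n")

teamInfo = load_cfg(cfg_path)
```

As in the post, `teamInfo["Captain"]` gives Dhoni and `teamInfo["Team"]` gives India.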

Thursday, October 18, 2012

Python Script to search file for specific content

Posted by Raju Gupta at 5:30 PM – 1 comments
 

Module : SearchFile
Parameters : input file , string to be searched
Output : 1 if the string is found inside the file, 0 if the string is not present in the file

This module opens the input file and scans it line by line for the specified string. If the string is found, the module returns 1; otherwise it returns 0.

Example:
if cricket.txt file contains
Sachin
Shewag
Dhoni

SearchFile("cricket.txt","Sachin") will return 1
SearchFile("cricket.txt","Sania") will return 0

The key being searched for is case sensitive, so
SearchFile("cricket.txt","sachin") will return 0   



def SearchFile(filename,key):
   fileptr = open(filename, "r")
   for line in fileptr:
      if key in line:
            fileptr.close()
            return 1
   fileptr.close()
   return 0
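The cricket.txt example can be reproduced end to end with a temporary file; search_file below is a simplified local copy of SearchFile used only for the demonstration.

```python
import os
import tempfile

def search_file(filename, key):
    """Return 1 if key occurs on any line of the file, else 0 (case sensitive)."""
    with open(filename, "r") as fileptr:
        for line in fileptr:
            if key in line:
                return 1
    return 0

# recreate the cricket.txt example from the post
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "cricket.txt")
with open(path, "w") as f:
    f.write("Sachin\nShewag\nDhoni\n")
```

As described above, searching for "Sachin" returns 1, while "Sania" and the lowercase "sachin" both return 0.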



Network Wise File Concatenation

Posted by Raju Gupta at 4:06 AM – 0 comments
 
This is a Python script to concatenate files network-wise. It treats the first file as the destination and appends the content of each subsequent file to it until the destination file's size exceeds a configurable value; it then moves the destination file to another location and deletes the source files.
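Stripped of the threading and shell calls, the core pattern is: append sources into a destination until a line threshold is reached, then roll over to a new destination. The sketch below works on in-memory strings; the names and the threshold are illustrative, not taken from the script.

```python
def concatenate_with_rollover(sources, max_lines):
    """Group source texts into destination 'files': each group stops growing
    once its line count reaches max_lines, then a new group is started."""
    groups = []
    current = []
    lines = 0
    for text in sources:
        if lines >= max_lines and current:
            groups.append("".join(current))  # roll over: close this destination
            current = []
            lines = 0
        current.append(text)
        lines += text.count("\n")
    if current:
        groups.append("".join(current))      # flush the last destination
    return groups
```

For example, four small inputs with a threshold of 3 lines produce two concatenated destinations.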


#!/usr/bin/env python

import sys, os, fnmatch, re
from threading import Thread
from threading import BoundedSemaphore

pool_sema = BoundedSemaphore(value=20)  # at most 20 threads can run at a time

maxsize = 200    # line threshold before rolling over to a new destination file

source_directory = "/ranger/RangerRoot/RangerData/CDRDelegatorData/"
destination_directory = "/ranger/RangerRoot/RangerData/CDRDelegatorDataPerf/"
log_directory = "/ranger/RangerRoot/LOG/"
Success_Directory = "/ranger/RangerRoot/RangerData/CDRDelegatorData/success/"
Success_Directory_Perf = "/ranger/RangerRoot/RangerData/CDRDelegatorDataPerf/success/"

filters = []
os.system('date')
for i in range(1, 24):
    filters.append("%02d*" % i)
filters.append("90*")
print filters

def fnumlines(filename):
    f = open(filename)
    return f.read().count("\n")

class PROCESS(Thread):
    def __init__(self, filelist, pool_sema, network_id):
        Thread.__init__(self)
        self.list = filelist
        self.pool_sema = pool_sema
        self.network_id = network_id

    def run(self):
        self.pool_sema.acquire()
        filename = self.list[0]
        PerfSuccessFile = filename
        logFile = re.sub("[*]", "", self.network_id) + "CDRDelegatorConcatenation.log"
        LogFullName = os.path.join(log_directory, logFile)
        log = open(LogFullName, "a")
        file = os.path.join(source_directory, filename)
        os.system('cp %s /ranger/ravi/subscriber/' % (file))
        rm_success = os.path.join(Success_Directory, filename)
        filename = file
        touch_Perf = os.path.join(Success_Directory_Perf, PerfSuccessFile)
        numlines = fnumlines(file)
        try:
            f = open(file, "a")
            for filename_temp in self.list[1:]:
                file = os.path.join(source_directory, filename_temp)
                os.system('cp %s /ranger/ravi/subscriber/' % (file))
                rm_success_file = os.path.join(Success_Directory, filename_temp)
                log_file = self.network_id + "date.log"
                log_date = os.path.join(log_directory, log_file)
                if numlines < maxsize:
                    try:
                        os.system('date > %s' % (log_date))
                        log.write(open(log_date).read())
                        log.write("%s is concatenating to %s  \n " % (file, filename))
                    except IOError:
                        log.write("Error: can't find file or write the data \n")
                    try:
                        f.write(open(file).read())
                    except IOError:
                        os.system('date > %s' % (log_date))
                        log.write(open(log_date).read())
                        log.write("Error occurred while reading the file %s \n" % (file))
                    else:
                        log.write("%s has been concatenated to %s ... DONE\n" % (file, filename))

                    numlines += fnumlines(file)
                    os.system('rm %s' % (file))
                    os.system('rm %s' % (rm_success_file))
                else:
                    # threshold reached: move the completed destination file
                    f.close()
                    os.system('mv %s %s' % (filename, destination_directory))
                    touch_Perf = os.path.join(Success_Directory_Perf, PerfSuccessFile)
                    touch_file = open(touch_Perf, "w")
                    print touch_Perf
                    touch_file.close()
                    PerfSuccessFile = filename_temp
                    os.system('rm %s' % (rm_success))
                    # use a new destination file
                    rm_success = rm_success_file
                    numlines = fnumlines(file)
                    f = open(file, "a")
                    filename = file
        finally:
            f.close()
            log.close()
            os.system('mv %s %s' % (filename, destination_directory))
            touch_Perf = os.path.join(Success_Directory_Perf, PerfSuccessFile)
            touch_file = open(touch_Perf, "w")
            print touch_Perf
            touch_file.close()
            os.system('rm %s' % (rm_success))
            self.pool_sema.release()

def directory_listing(directory):
    flist = os.listdir(directory)
    for i in range(len(flist)):
        full_path = os.path.join(directory, flist[i])
        statinfo = os.stat(full_path)
        flist[i] = statinfo.st_mtime, flist[i]
    flist.sort()
    x = []
    for i in range(len(flist)):
        x.append(flist[i][1])
    return x


#allfiles =  os.listdir(sys.argv[1])
allfiles = directory_listing(sys.argv[1])
threadlist = []
for filter in filters:
    files = fnmatch.filter(allfiles, filter)
    if not files:
        continue
    thread = PROCESS(files, pool_sema, filter)
    threadlist.append(thread)
    thread.start()

for thread in threadlist:
    thread.join()

os.system('date')


#Usage: ./thread_merge_exception.py "Directory Name"

Wednesday, October 17, 2012

Python Script for Adding or Subtracting the Dates

Posted by Raju Gupta at 6:00 PM – 0 comments
 
This Python script adds to and subtracts from dates; it was used on the VMS platform.

#----------------------------- 
# Adding to or Subtracting from a Date
# Use the rather nice datetime.timedelta objects
import datetime

now = datetime.date(2003, 8, 6)
difference1 = datetime.timedelta(days=1)
difference2 = datetime.timedelta(weeks=-2)

print "One day in the future is:", now + difference1
#=> One day in the future is: 2003-08-07

print "Two weeks in the past is:", now + difference2
#=> Two weeks in the past is: 2003-07-23

print datetime.date(2003, 8, 6) - datetime.date(2000, 8, 6)
#=> 1095 days, 0:00:00

#----------------------------- 
birthtime = datetime.datetime(1973, 1, 18, 3, 45, 50)   # 1973-01-18 03:45:50

interval = datetime.timedelta(seconds=5, minutes=17, hours=2, days=55) 
then = birthtime + interval

print "Then is", then.ctime()
#=> Then is Wed Mar 14 06:02:55 1973

print "Then is", then.strftime("%A %B %d %I:%M:%S %p %Y")
#=> Then is Wednesday March 14 06:02:55 AM 1973

#-----------------------------
when = datetime.datetime(1973, 1, 18) + datetime.timedelta(days=55) 
print "Nat was 55 days old on:", when.strftime("%m/%d/%Y").lstrip("0")
#=> Nat was 55 days old on: 3/14/1973



Program to create make file in VMS/UNIX

Posted by Raju Gupta at 3:58 AM – 1 comments
 

This program is used to create a make file on either the VMS or the UNIX platform. It can be reused as a module imported by other programs to generate make files on both platforms.


#Program to create executable in either VMS or UNIX platform:
import os
import sys

def make(project_name, param = ""):
    """Builds a project with unit test directives on.

    - project_name   the name of the project.
    - param          (optional) but can be set to any parameter (taken by
                     make.com) for VMS and only "nodebug" for UNIX.
    """
    project_name = project_name.lower()
    if sys.platform == "OpenVMS":
        option = ''
        if param:
            option = ' /' + param
        make_command = '$ pre_c_defines = "define=(""UNITTEST"")"\n' \
                       '$ pre_cxx_defines = "define=(""UNITTEST"")"\n' \
                       '$ sym_comp_c_defines = "/define=UNITTEST"\n' \
                       '$ sym_comp_cxx_defines = ' \
                           '"/define=(UNITTEST,__USE_STD_IOSTREAM)"\n' \
                       '$ make ' + project_name + option + '\n'
    else:  # UNIX platform
        option = ''
        if param == "nodebug" and isdebug():  # isdebug() is defined elsewhere
            # This is the same as the release command, which is currently undefined.
            option = 'unset sym_debug;. setdef.ksh > /dev/null\n'
        make_command = option + \
                       'export sym_proc_parm="define=UNITTEST"\n' \
                       'export sym_comp_parm="-DUNITTEST"\n' \
                       'rmake ' + project_name
    os.system(make_command)
    return

Tuesday, October 16, 2012

Python Code for creating Screen saver

Posted by Raju Gupta at 2:30 PM – 2 comments
 
This is the Python code for creating a screensaver of a moving ball. The ball will be moving along the screen, and when it hits any of the edges it will bounce back. If Python is installed on your mobile you can edit this code and create your own screen savers.
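The bounce arithmetic is independent of the phone graphics API. Here is a minimal one-axis sketch of the reflection, using the same 0.80 damping factor as the main loop; the function name is illustrative.

```python
def bounce_axis(position, velocity, limit, damping=0.80):
    """Reflect a coordinate off the walls at 0 and limit, damping the speed."""
    if position > limit:
        position = limit - (position - limit)  # mirror back inside the wall
        velocity = -damping * velocity
    elif position < 0:
        position = -position
        velocity = -damping * velocity
    return position, velocity
```

For example, a ball at 110 with the wall at 100 reflects to 90 with its speed reversed and reduced.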

import appuifw
from graphics import *
import e32
from key_codes import *

class Keyboard(object):
    def __init__(self,onevent=lambda:None):
        self._keyboard_state={}
        self._downs={}
        self._onevent=onevent
    def handle_event(self,event):
        if event['type'] == appuifw.EEventKeyDown:
            code=event['scancode']
            if not self.is_down(code):
                self._downs[code]=self._downs.get(code,0)+1
            self._keyboard_state[code]=1
        elif event['type'] == appuifw.EEventKeyUp:
            self._keyboard_state[event['scancode']]=0
        self._onevent()
    def is_down(self,scancode):
        return self._keyboard_state.get(scancode,0)
    def pressed(self,scancode):
        if self._downs.get(scancode,0):
            self._downs[scancode]-=1
            return True
        return False
keyboard=Keyboard()


appuifw.app.screen='full'
img=None
def handle_redraw(rect):
    if img:
        canvas.blit(img)
appuifw.app.body=canvas=appuifw.Canvas(
    event_callback=keyboard.handle_event,
    redraw_callback=handle_redraw)
img=Image.new(canvas.size)


running=1
def quit():
    global running
    running=0
appuifw.app.exit_key_handler=quit

location=[img.size[0]/2,img.size[1]/2]
speed=[0.,0.]
blobsize=16
xs,ys=img.size[0]-blobsize,img.size[1]-blobsize
gravity=0.03
acceleration=0.05



import time
start_time=time.clock()
n_frames=0

labeltext=u'Use arrows to move ball'
textrect=img.measure_text(labeltext, font='normal')[0]
text_img=Image.new((textrect[2]-textrect[0],textrect[3]-textrect[1]))
text_img.clear(0)
text_img.text((-textrect[0],-textrect[1]),labeltext,fill=0xffffff,font='normal')



while running:
    img.clear(0)
    img.blit(text_img, (0,0))
    img.point((location[0]+blobsize/2,location[1]+blobsize/2),
              0x00ff00,width=blobsize)
    handle_redraw(())
    e32.ao_yield()
    speed[0]*=0.999
    speed[1]*=0.999
    speed[1]+=gravity
    location[0]+=speed[0]
    location[1]+=speed[1]
    if location[0]>xs:
        location[0]=xs-(location[0]-xs)
        speed[0]=-0.80*speed[0]
        speed[1]=0.90*speed[1]
    if location[0]<0:
        location[0]=-location[0]
        speed[0]=-0.80*speed[0]
        speed[1]=0.90*speed[1]
    if location[1]>ys:
        location[1]=ys-(location[1]-ys)
        speed[0]=0.90*speed[0]
        speed[1]=-0.80*speed[1]
    if location[1]<0:
        location[1]=-location[1]
        speed[0]=0.90*speed[0]
        speed[1]=-0.80*speed[1]
        
    if keyboard.is_down(EScancodeLeftArrow):  speed[0] -= acceleration
    if keyboard.is_down(EScancodeRightArrow): speed[0] += acceleration
    if keyboard.is_down(EScancodeDownArrow):  speed[1] += acceleration
    if keyboard.is_down(EScancodeUpArrow):    speed[1] -= acceleration
    if keyboard.pressed(EScancodeHash):
        filename=u'e:\\screenshot.png'
        canvas.text((0,32),u'Saving screenshot to:',fill=0xffff00)
        canvas.text((0,48),filename,fill=0xffff00)
        img.save(filename)

    n_frames+=1
end_time=time.clock()
total=end_time-start_time

print "%d frames, %f seconds, %f FPS, %f ms/frame."%(n_frames,total,
                                                     n_frames/total,
                                                     total/n_frames*1000.)



Run Application from host

Posted by Raju Gupta at 3:49 AM – 1 comments
 

import os
print "Running notepad"
# os.execl replaces the current process, so no code after it runs;
# the second argument becomes the new process's argv[0]
os.execl("c:/Windows/Notepad.exe", "Notepad.exe", "c:/userlog.txt")


#or

import subprocess
subprocess.call("c:/Windows/Notepad.exe")
print "Running notepad"



#starting a Web browser:

import os
os.system('/usr/bin/firefox')
os.system(r'c:\"Program Files"\"Mozilla Firefox"\firefox.exe')


#Windows specific function os.startfile:

import os
os.startfile(r'c:\Program Files\Mozilla Firefox\firefox.exe')


Monday, October 15, 2012

Multithreading in Python

Posted by Raju Gupta at 2:30 PM – 0 comments
 

This code will give you a clear view of how useful Python is for threading compared to other programming languages.


import threading,Queue,time,sys,traceback

#Globals (start with a capital letter)
Qin  = Queue.Queue() 
Qout = Queue.Queue()
Qerr = Queue.Queue()
Pool = []   

def err_msg():
    trace= sys.exc_info()[2]
    try:
        exc_value=str(sys.exc_value)
    except:
        exc_value=''
    return str(traceback.format_tb(trace)),str(sys.exc_type),exc_value

def get_errors():
    try:
        while 1:
            yield Qerr.get_nowait()
    except Queue.Empty:
        pass

def process_queue():
    flag='ok'
    while flag !='stop':
        try:
            flag,item=Qin.get() #will wait here!
            if flag=='ok':
                newdata='new'+item
                Qout.put(newdata)
        except:
            Qerr.put(err_msg())
            
def start_threads(amount=5):
    for i in range(amount):
         thread = threading.Thread(target=process_queue)
         thread.start()
         Pool.append(thread)
def put(data,flag='ok'):
    Qin.put([flag,data]) 

def get(): return Qout.get() #will wait here!

def get_all():
    try:
        while 1:
            yield Qout.get_nowait()
    except Queue.Empty:
        pass
def stop_threads():
    for i in range(len(Pool)):
        Qin.put(('stop',None))
    while Pool:
        time.sleep(1)
        for index,the_thread in enumerate(Pool):
            if the_thread.isAlive():
                continue
            else:
                del Pool[index]
            break
#STANDARD use:
for i in ('b','c'): put(i)
start_threads()
stop_threads()
for i in get_all(): print i
for i in get_errors(): print i

#POOL use
#put element into input queue
put('a')

#setup threads -- will run forever as a pool until you shutdown
start_threads() 

for i in ('b','c'): put(i)

#get an element from output queue
print get() 

#put even more data in, 7 causes an error
for i in ('d','e',7): put(i)
#get whatever is available
for i in get_all(): print i

#stop_threads only returns when all threads have stopped
stop_threads()
print '__threads finished last data available__'
for i in get_all(): print i
for i in get_errors(): print i
#starting up threads again
start_threads()
put('f')
stop_threads()
print '__threads finished(again) last data available__'
for i in get_all(): print i
for i in get_errors(): print i
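For comparison, the same pool pattern can be sketched in Python 3, where the Queue module was renamed to queue; this is a minimal sketch with illustrative names, not a drop-in replacement for the module above.

```python
import queue
import threading

def run_pool(items, worker, num_threads=5):
    """Feed items through a pool of threads; collect results and errors."""
    qin, qout, qerr = queue.Queue(), queue.Queue(), queue.Queue()

    def process():
        while True:
            item = qin.get()
            if item is None:        # sentinel: stop this worker
                return
            try:
                qout.put(worker(item))
            except Exception as exc:
                qerr.put((item, repr(exc)))

    threads = [threading.Thread(target=process) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for item in items:
        qin.put(item)
    for _ in threads:               # one sentinel per worker
        qin.put(None)
    for t in threads:
        t.join()
    results, errors = [], []
    while not qout.empty():
        results.append(qout.get())
    while not qerr.empty():
        errors.append(qerr.get())
    return results, errors
```

As in the original, a bad input (like the integer 7 fed to a string worker) lands on the error queue instead of killing the pool.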

 


This utility determines the size of a folder and its subfolders in MB

Posted by Raju Gupta at 3:41 AM – 0 comments
 

This simple Python snippet will list the size of all the subfolders in a parent folder in MB. It is developed using the standard os module.


# determine size of a given folder in MBytes

import os

# pick a folder you have ...
folder = 'D:\\zz1'
folder_size = 0
for (path, dirs, files) in os.walk(folder):
  for file in files:
    filename = os.path.join(path, file)
    folder_size += os.path.getsize(filename)

print "Folder = %0.1f MB" % (folder_size/(1024*1024.0))

Sunday, October 14, 2012

Using UNIX Environment Variables From Python

Posted by Raju Gupta at 4:25 AM – 1 comments
 

Using environ from the os package does not work well when unsetting variables. Writing the UNIX commands to a Korn shell script file and running it from Python solves the issue.

import os
command = "unset TEST " + "\n" \
          "export GLOBAL_VALUE='global_value'" + "\n" \
                    "sample\n"
scr_name = "temp.tmp"
scr_file = open(scr_name, 'w') #write everything to a temp script file
scr_file.write(command)
scr_file.close()
os.chmod(scr_name, 0744) # mode -rwxr--r--
status = os.system(scr_name)
os.remove(scr_name)
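A self-contained sketch of the same write-script-and-run idea, using /bin/sh and capturing the output so the effect is visible; it assumes a UNIX-like system, and the variable names are just examples.

```python
import os
import subprocess
import tempfile

# write the shell commands to a temp script, as in the snippet above
script = "unset TEST\nexport GLOBAL_VALUE='global_value'\necho \"$GLOBAL_VALUE\"\n"
fd, scr_name = tempfile.mkstemp(suffix=".sh")
with os.fdopen(fd, "w") as scr_file:
    scr_file.write(script)
os.chmod(scr_name, 0o744)  # mode -rwxr--r--

# run the script and capture what it prints, then clean up
output = subprocess.check_output(["/bin/sh", scr_name]).decode().strip()
os.remove(scr_name)
```

Capturing the output confirms the export took effect inside the child shell, even though the parent Python process's environment is untouched.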


CVS Macro for Commited Files

Posted by Raju Gupta at 3:41 AM – 0 comments
 

This utility gets all the files committed by a particular user in CVS after a specified date and copies these files into a desired location, preserving the original directory-structure path.

The original data structure path always starts with c:\work\PW_Portal.

from cvsgui.Macro import *
from cvsgui.CvsEntry import *
from cvsgui.ColorConsole import *
import cvsgui.App
import os.path, os, shutil
import zipfile, string
import re

from cvsgui.Persistent import *
from cvsgui.MenuMgr import *
from cvsgui.SafeTk import *


class MyConfig(Macro):
    def __init__(self):
        Macro.__init__(self, "My Utility", MACRO_SELECTION,
            0, "My Menu")

        self.m_path = Persistent("PY_PATH", "select the destination folder", 1)
        self.m_date = Persistent("PY_DATE", "dd-mmm-yyyy", 1)
        self.m_name = Persistent("PY_NAME", "name", 1)

    def OnCmdUI(self, cmdui):
        cmdui.Enable(1)

    def Run(self):
        msg = "Please enter the path for delivery :"
        msg1 = "Please enter the Date from when you want modified files :"
        msg2 = "Please enter the user name whose committed files have to be created in directory structure"
        title = "Path"
        outputpath = str(self.m_path)
        datestr = str(self.m_date)
        namestr = str(self.m_name)

        res, outputpath = cvsgui.App.PromptEditMessage(msg, outputpath, title)
        if res and len(outputpath) > 0:
            self.m_path << outputpath

        res, datestr = cvsgui.App.PromptEditMessage(msg1, datestr, title)
        if res and len(datestr) > 0:
            self.m_date << datestr

        res, namestr = cvsgui.App.PromptEditMessage(msg2, namestr, title)
        if res and len(namestr) > 0:
            self.m_name << namestr

        cvs = Cvs(1)
        console = ColorConsole()
        console << kMagenta << datestr << "\n" << kNormal
        console << kMagenta << "Username:" << namestr << "\n" << kNormal

        okCancel = _cvsgui.PromptMessage("Shall I Continue", "Message", "OK", "Cancel")
        if okCancel == 0:
            return

        code, out, err = cvs.Run("history", "-xACM", "-D%s" % datestr, "-u%s" % namestr)
        lines = string.split(out, '\n')
        console << kMagenta << "Copying files : " << "\n"
        for line in lines:
            #console << kMagenta << line << "\n" << kNormal
            mobj = re.match("^[MCA] \d\d\d\d-\d\d-\d\d \d\d:\d\d \+0000 [a-z_]+ +[\d.]+ +(.*) +(PW_Portal.*) +==", line)
            if mobj:
                file_name = mobj.group(1).strip()
                dir_name = mobj.group(2).strip()
                #console << kBlue << dir_name << "/" << file_name << "\n" << kNormal
                srcname = os.path.join("c:/work/", dir_name + '/' + file_name)
                targetname = os.path.join(outputpath, "delivery" + '/' + dir_name)
                if not os.path.exists(targetname):
                    os.makedirs(targetname)
                shutil.copy(srcname, targetname)
                console << kGreen << srcname << " to " << targetname << "\n" << kNormal

MyConfig()
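The history-parsing regex can be exercised on its own against a made-up line; the sample below is a hypothetical example of the `cvs history -xACM` output format the macro expects, not captured output.

```python
import re

HISTORY_RE = re.compile(
    r"^[MCA] \d\d\d\d-\d\d-\d\d \d\d:\d\d \+0000 [a-z_]+ +[\d.]+ +(.*) +(PW_Portal.*) +==")

# hypothetical sample line in the expected format
sample = "M 2012-10-14 03:41 +0000 raju_g 1.4 index.jsp PW_Portal/web =="
mobj = HISTORY_RE.match(sample)
```

Group 1 captures the file name and group 2 the PW_Portal-rooted directory, which the macro then joins under c:/work/.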



Generating unique id's

Posted by Raju Gupta at 3:38 AM – 0 comments
 

The script is developed in Python. It assigns a unique identifier to every database. It can be reused to assign unique ids (alphanumeric values) to any set of data.


def generate_grpid(dbname):

    db_ids = {'pdb':0,
              'vast':1,
              'taxonomy':2,
              'genbank':3,
              'dbSNP':4,
              'enzyme':5,
              'uniprot':6,
              'go':7,
              'interpro':8,
              'rebase':9,
              'prosite':10,
              'pdb-hetero':11
             }

    x = db_ids[dbname]
    # 'x' is the unique id returned.
    return x




