A new version of Python template maker


I developed a script earlier for dressing pages with a desired template; here is the newer version:

from BeautifulSoup import BeautifulSoup
#Configuration
root="D:/users/suat.atan/Desktop/yenisehrivan/"
cssroot="/htmlres/css/"
jsroot="/htmlres/js/"

class TemplateMaker():
    def generate(self, NUDEPAGE_NAME):
        # read the nude (zero-templated) page and the template (index) page
        NUDEPAGE = open(root + "pages/" + NUDEPAGE_NAME + ".html").read()
        index = open(root + "pages/index.html").read()
        soup = BeautifulSoup(index)
        head = soup.findAll("head")
        #header
        header_1 = soup.findAll("div", id="header")
        #menu
        header_2 = soup.findAll("div", id="topmenu")
        #change css
        css = soup.findAll("link", id="maincss")
        css[0]['href'] = cssroot + NUDEPAGE_NAME + ".css"
        #change js
        js = soup.findAll("script", id="mainjs")
        js[0]['src'] = jsroot + NUDEPAGE_NAME + ".js"
        HEADER = str(header_1[0]) + str(header_2[0])
        HEAD = str(head[0])
        # surrounding page markup: opening tags, body opening tag and closing tags
        html1 = """<html>
"""
        html2 = """<body>
"""
        html3 = """
</body>
</html>"""
        sign = ""
        HTML = html1 + HEAD + html2 + HEADER + NUDEPAGE + html3
        file = open(root + "pages/gen-" + NUDEPAGE_NAME + ".html", "w")
        file.write(HTML)
        file.close()
        print "--Generated " + NUDEPAGE_NAME
        return True

tm=TemplateMaker()
tm.generate("detail")

Dreamweaver-like html template dressing in Aptana with Python


The best way to dress multiple pages with a single template is Dreamweaver's HTML template system. This system is fine, but it is not available in Aptana. To solve this problem, I coded a simple Python script that can be launched from Aptana. The script reads the head elements of the main page (CSS and JavaScript), then asks for the name of the zero-templated (nude) page, gets the content of that page, and combines the template with it. Finally, it writes the templated page to an HTML file.

To use this script, you must download the BeautifulSoup library and import it.

from BeautifulSoup import BeautifulSoup
root="D:/users/suat.atan/Desktop/myproject/"
index=open(root+"pages/index.html").read()
soup = BeautifulSoup(index)
head= soup.findAll("head")
HEAD= head[0]
html1=""""""
html2="""""" body_name=raw_input("Template page: (withoyut .html expression)-->") body=open(root+"pages/"+body_name+".html").read() html3=""
HTML=html1+str(HEAD)+html2+body+html3
file =open(root+"pages/new_template.html","w")
file.write(HTML)
file.close()
print "Generated"

Getting url variables quickly in Google App Engine


Normally, getting URL variables in Google App Engine (Python) takes a line of code:

param = self.request.get('url_param')

print param

But I just discovered that it can also be done with a quicker and easier method:

def get(self, url_param):
    print url_param

We can define the URL parameter the page works with as a parameter of the get() function. After that we can use it directly.
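For example, here is a minimal sketch (the /hello/(.*) route and the HelloPage handler name are my own, invented for illustration) showing how webapp passes the group captured by the route regex straight into get():

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class HelloPage(webapp.RequestHandler):
    def get(self, url_param):
        # the value captured by the route regex arrives here directly
        self.response.out.write(url_param)

# the regex group in the URL mapping becomes get()'s url_param
application = webapp.WSGIApplication([('/hello/(.*)', HelloPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()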

Blueprint typography class


The Blueprint CSS framework also has typography classes for quick and consistent text styling:

 

.small      font-size 0.8em; line-height 1.875
.large      font-size 1.2em; line-height 2.5
.hide       display: none
.quiet      color #666 (grey)       // very useful for de-emphasizing unimportant notes
.loud       color #000 (black)
.highlight  background #ff0 (yellow) // very useful for marking important notes and matched words in search results
.added      background #060 (green)  // for marking newly added items
.removed    background #900 (red)
.first      margin-left 0; padding-left 0
.last       margin-right 0; padding-right 0
.top        margin-top 0; padding-top 0
.bottom     margin-bottom 0; padding-bottom 0

Print as json format in Google App Engine


When you want to print content, you use self.response.out.write(). But this function can't print a Python dictionary as JSON directly. With a few small changes, you can:

Firstly, import the simplejson library:

from django.utils import simplejson

And print the JSON:

my_response = {'status':'ok' ,
'message':'Suat ATAN Blog suatatan.wordpresscom'}
self.response.headers['Content-Type'] = 'text/plain'
self.response.out.write(simplejson.dumps(my_response))

You can process this response with jQuery:

$(document).ready(function(){
    $('input[type=submit]').attr("onclick", "return false");
    var action_url = $("#ajaxform").attr("action");

    $('input[type=submit]').click(function(){
        var formdata = $('form').serialize();
        $.post(action_url, formdata,
            function(response){
                $("#formalert").html(response.message);
            }, 'json');
    });
});

Enjoy !

Getting Flickr Photoset List with cover image with jQuery


The Flickr API offers the photoset list with a simple API call, but the response doesn't include the photoset cover image. To resolve this problem, I wrote a JavaScript block:

Code is here:

/*
 * flickr-json-sets-cover.js
 * Coded By Suat ATAN
 * Flickr API connection over JSON
 * 06HAZ2011-14:43
 *
 * All rights reserved
 */

var api_key="your_api_key";
var user_id="64925203@N02";

function f_getcoverimage(photoset_id){
    $.ajax({
        type: 'GET',
        url: "http://api.flickr.com/services/rest/?method=flickr.photosets.getPhotos&api_key="+api_key+"&photoset_id="+photoset_id+"&extras=url_m%2Curl_t%2Curl_sq%2Curl_s&format=json&nojsoncallback=1",
        dataType: "json",
        success: function(jsondata){
            // gets the first photo of the set
            var obj = jsondata.photoset.photo[0];
            var title = obj.title;
            var url_m = obj.url_m;
            var url_s = obj.url_s;
            var url_sq = obj.url_sq;
            var url_t = obj.url_t;
            // show the first photo as the set's cover image
            // (square thumbnail; url_t / url_s / url_m hold the other sizes)
            $("a#"+photoset_id).prev(".imgholder").html('<img src="'+url_sq+'" alt="'+title+'" title="Flickr" />');
        }
    });
}


//
function f_getlist(){
    $.ajax({
        type: 'GET',
        url: "http://api.flickr.com/services/rest/?method=flickr.photosets.getList&api_key="+api_key+"&user_id="+user_id+"&format=json&nojsoncallback=1",
        dataType: "json",
        success: function(jsondata){
            $.each(jsondata.photosets.photoset, function(i, set){
                var title = set.title._content;
                var id = set.id;
                f_getcoverimage(id);
                // list item for this set: an .imgholder placeholder (filled with
                // the cover image by f_getcoverimage) followed by the set title
                $("#list").append('<span class="imgholder"></span><a id="'+id+'">'+title+'</a><br/>');
            });
        }
    });
}
//



$(document).ready(function(){
    f_getlist();
});

HTML file:

<html>
<head>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load("jquery", "1.4.2");
</script>
<script type="text/javascript" src="flickr-json-sets-cover.js"></script>
</head>
<body>
<div id="list"></div>
</body>
</html>

This script connects to your Flickr account via the Flickr API, retrieves the photoset list, and for each photoset calls the API's getPhotos method, whose response includes all images of the photoset. With obj = jsondata.photoset.photo[0]; we take only the first image and use it as the cover image.
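The same cover-image trick can also be sketched outside the browser. The following Python sketch is my own illustration (not part of the original script); it assumes Python 2.6+ for the json module and calls the same two REST methods with the same api_key and user_id placeholders:

import json
import urllib2

API_KEY = "your_api_key"   # placeholder, as in the JavaScript version
USER_ID = "64925203@N02"
REST = "http://api.flickr.com/services/rest/"

def flickr_call(method, **params):
    # build a REST call that returns plain JSON (nojsoncallback=1)
    params.update(method=method, api_key=API_KEY, format="json", nojsoncallback=1)
    query = "&".join("%s=%s" % (k, v) for k, v in params.items())
    return json.loads(urllib2.urlopen(REST + "?" + query).read())

# list the photosets, then take the first photo of each set as its cover
sets = flickr_call("flickr.photosets.getList", user_id=USER_ID)
for pset in sets["photosets"]["photoset"]:
    photos = flickr_call("flickr.photosets.getPhotos",
                         photoset_id=pset["id"], extras="url_sq")
    cover = photos["photoset"]["photo"][0]   # first photo = cover image
    print pset["title"]["_content"], cover.get("url_sq")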

 

Retrieve videos from a specific YouTube user with Python


  1. Download the Gdata Python client libraries from: http://code.google.com/p/gdata-python-client/downloads/list
  2. Go to the "src" directory and copy its contents to your Python workspace. This can also be a Google App Engine workspace.
  3. Import the needed libraries in your source code, e.g.:

import gdata.youtube
import gdata.youtube.service

Try this code:

class TestPage(webapp.RequestHandler):
    def get(self):
        sonuc = self.GetAndPrintUserUploads("suatatanvan")
        html_degerleri = {'TESTDATA': sonuc}
        path = os.path.join(os.path.dirname(__file__), 'pages/test.htm')
        self.response.out.write(template.render(path, html_degerleri))

    def GetAndPrintUserUploads(self, username):
        yt_service = gdata.youtube.service.YouTubeService()
        uri = 'http://gdata.youtube.com/feeds/api/users/%s/uploads' % username
        feed = yt_service.GetYouTubeVideoFeed(uri)
        sonuc = ""
        for entry in feed.entry:
            # build a simple HTML line per video: title, thumbnail, description and id
            sonuc = "Title: %s <img src='%s'/> Content: %s ID: %s <br/>" % (
                entry.title.text,
                entry.media.thumbnail[1].url,
                entry.content.text,
                entry.id.text) + sonuc
        return sonuc

You can call the

GetYouTubeVideoFeed(uri)

function on the YouTube service object created with

yt_service = gdata.youtube.service.YouTubeService()

The uri parameter of GetYouTubeVideoFeed looks like this:

http://gdata.youtube.com/feeds/api/users/<username>/uploads

This URL returns a raw XML page that contains the user-specific uploads, and we can retrieve the needed data from it.

With a for loop:

for entry in feed.entry:
    sonuc = entry.title.text + "<br/>" + sonuc
return sonuc

Notice that feed.entry corresponds to the entry XML nodes in the response page; there is one entry node for each video.
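For instance, here is a quick sketch (my own illustration, using the same gdata calls and the same entry fields as the code above) that just prints a few fields of each entry to the console:

import gdata.youtube
import gdata.youtube.service

yt_service = gdata.youtube.service.YouTubeService()
uri = 'http://gdata.youtube.com/feeds/api/users/%s/uploads' % 'suatatanvan'
feed = yt_service.GetYouTubeVideoFeed(uri)

for entry in feed.entry:
    # each entry node of the feed exposes the video's metadata
    print 'Title:    ', entry.title.text
    print 'Id:       ', entry.id.text
    print 'Content:  ', entry.content.text
    print 'Thumbnail:', entry.media.thumbnail[1].url
    print '---'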