
za python

Date: 2010-10-06  Source: lexus

Routes is a Python re-implementation of the Rails routes system for mapping URLs to application actions, and conversely for generating URLs. Routes makes it easy to create pretty, concise, RESTful URLs.

Routes allows conditional matching based on domain, cookies, HTTP method, or a custom function. Sub-domain support is built in. Routes comes with an extensive unit test suite.

Buzzword Compliance: REST, DRY
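
For a feel of the API, here is a minimal sketch of mapping a URL to an action and generating it back with Routes (the route name and pattern are invented for illustration):

 from routes import Mapper

 # Connect a named URL pattern to a controller, then match and generate.
 map = Mapper()
 map.connect('entry', '/blog/{action}/{id}', controller='blog')

 print(map.match('/blog/view/5'))
 # -> {'controller': 'blog', 'action': 'view', 'id': '5'}
 print(map.generate(controller='blog', action='view', id=5))
 # -> '/blog/view/5'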

One simple user error that keeps cropping up is having multiple greenthreads read from the same socket at the same time. It's an easy mistake to make: just create a shared resource that contains a socket, and spawn at least two greenthreads to use it:

 import eventlet
 httplib2 = eventlet.import_patched('httplib2')
 # One Http instance (and thus one underlying socket) shared by both greenthreads.
 shared_resource = httplib2.Http()
 def get_url():
     resp, content = shared_resource.request("http://eventlet.net")
     return content
 p = eventlet.GreenPile()
 p.spawn(get_url)
 p.spawn(get_url)
 results = list(p)
 assert results[0] == results[1]

 

Running this with Eventlet 0.9.7 results in an httplib.IncompleteRead exception being raised. It’s because both calls to get_url are divvying up the data from the socket between them, and neither is getting the full picture.  The IncompleteRead error is pretty hard to debug — you’ll have no idea why it’s doing that, and you’ll be frustrated.

What’s new in the tip of Eventlet’s trunk is that Eventlet itself will warn you with a clear error message when you try to do this. If you run the above code with development Eventlet (see sidebar for instructions on how to get it) you now get this error instead:

RuntimeError: Second simultaneous read on fileno 3 detected.  Unless
 you really know what you're doing, make sure that only one greenthread
 can read any particular socket.  Consider using a pools.Pool. If you do know
 what you're doing and want to disable this error, call
 eventlet.debug.hub_multiple_reader_prevention(False)

 

Cool, huh? A little clearer about what exactly is going wrong here. And if you really want to do multiple readers or multiple writers on the same socket simultaneously, there’s a way to disable the protection.
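
That disabling call is the one named in the message itself:

 import eventlet.debug
 eventlet.debug.hub_multiple_reader_prevention(False)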

Of course, the fix for this particular toy example is to give every greenthread its own instance of Http():

 import eventlet
 httplib2 = eventlet.import_patched('httplib2')
 def get_url():
     # A fresh Http instance per call, so no socket is ever shared.
     resp, content = httplib2.Http().request("http://eventlet.net")
     return content
 p = eventlet.GreenPile()
 p.spawn(get_url)
 p.spawn(get_url)
 results = list(p)
 assert results[0] == results[1]

 

But you probably created that shared_resource because you wanted to reuse Http() instances between requests. So you need some other way to share connections. This is what pools.Pool objects are for! Use them like this:

 from __future__ import with_statement
 import eventlet
 from eventlet import pools
 httplib2 = eventlet.import_patched('httplib2')

 # Each item in the pool is constructed by calling httplib2.Http.
 httppool = pools.Pool()
 httppool.create = httplib2.Http

 def get_url():
     # Check an Http instance out of the pool; it goes back automatically
     # when the with block exits.
     with httppool.item() as http:
         resp, content = http.request("http://eventlet.net")
         return content

 p = eventlet.GreenPile()
 p.spawn(get_url)
 p.spawn(get_url)
 results = list(p)
 assert results[0] == results[1]

The Pool class will guarantee that the Http instances are reused if possible, and that only one greenthread can access each at a time. If you’re looking for somewhat more advanced usage of this design pattern, take a look at the source code to Heroshi, a concurrent web crawler written on top of Eventlet.


restkit 2.2.0

Python REST kit


Latest Version: 2.2.1

About

Restkit is an HTTP resource kit for Python. It allows you to easily access HTTP resources and build objects around them. It's the base of couchdbkit, a Python CouchDB framework.

Restkit is a full HTTP client using pure socket calls and its own HTTP parser. It's not based on httplib or urllib2.
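
A minimal usage sketch, going by restkit's documented Resource API of this era (the URL and headers are just examples):

 from restkit import Resource

 # Wrap an HTTP endpoint as a resource object and issue a GET against it.
 res = Resource('http://friendpaste.com')
 resp = res.get('/json', headers={'Accept': 'application/json'})
 print(resp.body_string())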

Installation
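
The original installation instructions are missing from this copy. Restkit is on PyPI, so the usual route, assuming setuptools or pip is available, would be one of:

 easy_install restkit
 pip install restkit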


A small Python tool for MySQL read/write splitting that I wrote recently: angel mysql proxy

Under a certain amount of database load, MySQL read/write splitting genuinely helps, and there seem to be plenty of programs that do it; MySQL even has an official read/write splitting product called MySQL Proxy. As I understand it, implementing basic read/write splitting isn't that hard, so I wrote one my own way. It currently holds up at 600 concurrent connections. Beyond 600, zombie child processes stop getting reaped in time, and the proxy ends up hung. My current idea for a fix is to move the reaping logic into its own dedicated process, so that heavy concurrency can't block it from responding in time. ~_~ Just an idea for now.

Download: I put it on phpchina.com. (Couldn't figure out how to upload it to CSDN, annoyingly.)

http://bbs.phpchina.com/viewthread.php?tid=173892

Current flow:

PHP client --> angel.py (read/write splitting; reads load-balanced) --> MySQL

angel mysql proxy uses forking as its concurrency model. The read-balancing algorithm is currently just random, because during development I hit the problem of how to get each child process's MySQL connection count back to the parent, so that the parent could distribute client reads according to the current connection count of each MySQL server. My first thought was to share the state through a file, but that brings locking problems, and I worried about the efficiency hit. Then I thought of shared memory, but couldn't find a suitable Python library for it. Only today did I conclude that a pipe should solve this ~_~ also just an idea for now, as sketched below.
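
The post leaves the pipe approach as an idea; here is a minimal sketch of how a forked child might report its connection count back to the parent (all specifics are assumed, not from the original tool):

 import os

 r, w = os.pipe()
 pid = os.fork()
 if pid == 0:
     # Child: report "I now hold 1 connection" to the parent and exit.
     os.close(r)
     os.write(w, b'1')
     os._exit(0)
 else:
     # Parent: tally per-child connection state for read balancing.
     os.close(w)
     count = int(os.read(r, 1))
     os.waitpid(pid, 0)
     print(count)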

For reaping, I tried both loop-based reaping and signal-based reaping; after testing, I settled on signal-based reaping, which seemed to keep up better under concurrency.
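
"Signal-based reaping" presumably means the standard SIGCHLD pattern; a minimal sketch (not the tool's actual code):

 import os
 import signal

 def reap(signum, frame):
     # Collect every exited child without blocking, so no zombies pile up.
     while True:
         try:
             pid, status = os.waitpid(-1, os.WNOHANG)
         except OSError:  # no children left at all
             break
         if pid == 0:     # children exist, but none have exited yet
             break

 signal.signal(signal.SIGCHLD, reap)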

Let's look at the config file first. I just used a plain .py file for it, for convenience's sake.
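
The config listing that followed is missing from this copy. Since it was a plain .py file, it presumably looked something along these lines (every name and value here is hypothetical):

 # Hypothetical reconstruction of an angel mysql proxy config (.py file)
 LISTEN_HOST = '0.0.0.0'
 LISTEN_PORT = 3307

 # One master for writes; reads are balanced across the slaves at random.
 MASTER = {'host': '192.168.1.10', 'port': 3306}
 SLAVES = [
     {'host': '192.168.1.11', 'port': 3306},
     {'host': '192.168.1.12', 'port': 3306},
 ]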


A general-purpose filesystem must balance read and write efficiency across files of all sizes and formats, so it is not optimal for any particular kind of read or write. Where necessary, you can maximize read or write efficiency for a specific workload by choosing a different filesystem, or by tuning its configuration parameters. For example, if the filesystem needs to store a large number of small files, ReiserFS [37] can replace ext3, the Linux default: ReiserFS is built on a balanced-tree structure, so for huge filesystems with many files its searches are faster than ext3's partial binary search. Directories in ReiserFS are allocated fully dynamically, so it avoids the situation, common with ext3, where the disk space consumed by a huge directory can never be reclaimed. Small files (< 4K) can be stored directly in the tree, making small-file reads and writes faster; interior tree nodes are byte-aligned, so multiple small files can share a single disk block, saving a great deal of space. ext3 uses a fixed-size block allocation strategy, meaning even a file under 4K occupies a full 4K block, which wastes a significant amount of space [38].

However, ReiserFS support in many Linux kernels is poor, including 2.4.3, 2.4.9, and even the relatively recent 2.4.16; a site that wants to use it must install the 2.4.18 kernel, which works well with it. Administrators are generally reluctant to run very new kernels, because the software on them has not yet seen extensive real-world testing and may harbor undiscovered bugs, and for a server even a small bug is unacceptable. ReiserFS is also still a young, rapidly evolving filesystem, and compared with ext3 it has one major drawback: every ReiserFS upgrade requires completely reformatting the disk partition. So choosing it involves weighing these trade-offs [39].


