Posted about 14 years ago by Plone News
The Abstract Team is happy to invite you all to Sorrento, Italy, from April 28 to May 1, 2011, for the Plone Open Garden.
After the European Symposium last year, we received very good feedback on the productive Open Garden session. Many people greatly appreciated sharing ideas in the warm atmosphere of that lovely garden in Sorrento, and we decided to repeat the experience.
Have a look at the Open Garden's pages for further details.
|
Posted about 14 years ago by Fetchez le Python
tl;dr: I have had this idea coming back to me for a few months now, while working on the Sync, Easy Setup and Identity servers: I want to create a light DL (description language) to describe what REST web services an application implements, and use it to automate the request dispatching, but also the documenting and testing of the services.
When we create web services, there are a lot of things we do that are quite systematic:
1. Describe the web services in a document.
2. Define what code should be executed.
3. Validate and transform incoming requests and outgoing responses.
4. Run functional and security tests on the web services.
1. Describe the web services (=API)
Documenting the web services the application needs to offer is the first step in building it. A document needs to list the URL paths, the methods to use, and what goes in and what comes out.
For the Easy Setup server (a.k.a. the J-PAKE server), I've documented an initial design of the web services here: https://wiki.mozilla.org/Services/Sync/SyncKey/J-PAKE#Server_API and we worked with Stefan and Philipp to refine the design iteratively.
Once the application is built, that document is an implementation-agnostic description that anyone can use to build a new client that interacts with our server.
2. What code should be executed?
The next step is to write the code that does the job: on each request, we need to determine what piece of code should be called, and execute it to build the response.
It can be a set of regexps managed by a dispatcher tool (that's Routes). It can also be a simple function decorator (that's Bottle). Other approaches use the code namespace to dispatch the request, like implementing a Root.get_index method for a GET on the /index URL.
At Services, we use Routes and feed it with a list that contains a description of the URLs. See this example: http://hg.mozilla.org/services/server-storage/file/78762deede5d/syncstorage/wsgiapp.py#l65.
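To make those two styles concrete, here is a tiny illustration of each (my sketch, not code from the Services project):

# Routes style: a mapper translates URL patterns into controller/action
# names that a dispatcher then looks up.
from routes import Mapper

mapper = Mapper()
mapper.connect('/index', controller='root', action='get_index')

# Bottle style: a decorator registers the callable directly on the URL.
from bottle import route

@route('/index')
def get_index():
    return 'hello'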
3. Validate & Filter Requests and Responses
Besides the feature code itself, the application usually performs a series of validations and/or transformations on the incoming request, like checking its headers or extracting objects from a JSON body. The same can be done on outgoing responses.
Those steps are usually generic enough to be reused across all web services. And most of the time, those steps should not be implemented as WSGI middleware. One simple question to ask yourself to decide whether to create or use a middleware: does your application break if you remove that middleware? If so, your application may not work without the middleware, so it should become a library on which the application has a hard dependency.
Some examples (sketched in code below):
- an authorization function that checks the Authorization header and sets a user name in the execution context.
- a function that checks that the body contains parseable JSON, and deserializes it into the execution context.
- a function that serializes all response bodies into JSON.
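In code, hypothetical versions of those hooks might look like this (a WebOb-style request object and a check_token helper are assumed; neither is from the post):

import json

def authenticate(request):
    # Check the Authorization header and store the user name in the
    # execution context (here: the WSGI environ).
    user = check_token(request.headers.get('Authorization'))  # hypothetical
    request.environ['myapp.user'] = user

def extract_json_from_body(request):
    # Check that the body is parseable JSON, and keep the result around.
    request.environ['myapp.data'] = json.loads(request.body)

def jsonify(result):
    # Serialize whatever the feature code returned into a JSON string.
    return json.dumps(result)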
4. Create functional and security tests
As I said in section 1, building a new client against the server should be possible simply by reading the documentation.
That's how we build our functional test suite: it runs a series of requests against each web service and checks that the responses are the ones expected. These tests simply validate that the server acts as documented.
Depending on the security expectations, another series of tests can check the server's behavior when it receives unexpected input.
A light, implementation-specific DL
So, the idea would be to describe everything in a static file that can be used:
- by the application, to:
  - automatically dispatch the requests to some callable
  - execute some functions before and after the main callable, to transform and filter data
- by the testers, to drive a generic HTTP client powered by the DL file.
- by the documentation, to generate an HTML or Wiki version of that DL file.
After some investigation, I found WADL and got scared. That's probably the XML effect.
WADL is very close to what I am looking for (see an example). But while less complex than WSDL, it still seems a bit overkill for what I want to do. I am not sure, for instance, that I want to fully describe the structure of the responses. And WADL only documents the web services; it does not point to the code that implements them.
I want to do both things in the DL, and keep the implementation-specific parts light enough that they will not really annoy anyone. They are still useful when you're not inside the application itself: for example, a wiki output could link to an online view of the code that's used.
Here's a (very quick) draft in pseudo-YAML, off the top of my head:
POST "/the/webservice/is/here":
    id: cool_service
    description: The cool service does this and that.
    request:
        headers:
            Authorization: description of the supported token etc.
        body: explains here what the body should contain
    response:
        body: explains what the body contains
        headers:
            Header-1: description of that header
            content-type: application/json
        codes:
            200: explains here what getting a success means
            503: explains here when you might get a 503
            400: explains here when it's a bad request
    implemented_by: module.class.method
    pre-hooks:
        - authenticate
        - extract_json_from_body
    post-hooks:
        - jsonify
The first part really is just documenting your web service.
But it's also detailed enough to automatically create an HTTP client that can be the basis for tests, e.g.:
def post_cool_service(body, authorization):
    """The cool service does this and that.

    Arguments:
        authorization: description of the supported token etc.
        body: explains here what the body should contain

    Returns: code, body, headers
    """
    # ... generic curl-y code ...
Not sure about the generative aspect though, because it’s hard to maintain. Maybe a dynamic introspection of the DL file is a better idea here…
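For what it's worth, a rough sketch of that dynamic approach could look like this, assuming the DL file has been parsed into a dict (with PyYAML, say). This is my illustration, not code from the project:

import json
import urllib2

def make_client(method, path, spec, base_url):
    # Build, at runtime, a function that calls the described web service.
    def call(body=None, headers=None):
        data = json.dumps(body) if body is not None else None
        req = urllib2.Request(base_url + path, data, headers or {})
        req.get_method = lambda: method  # force POST, PUT, DELETE, etc.
        try:
            resp = urllib2.urlopen(req)
            return resp.getcode(), resp.read(), resp.info()
        except urllib2.HTTPError as e:
            return e.code, e.read(), e.info()
    call.__doc__ = spec.get('description')
    return call

A test would then call make_client('POST', '/the/webservice/is/here', spec, 'http://localhost:5000') and assert that the returned code and body match the documented ones.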
The second part tells the server what code should be run, something similar to:
def cool_service(request):
    """The cool service"""
    authenticate(request)
    extract_json_from_body(request)
    response = module.class.method(request)
    return jsonify(response)
In other words, I could strip away all the boilerplate code I have around the code that implements the features themselves, and just combine everything with a few helper functions via the DL.
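For illustration, the DL-driven glue could be as small as this (my sketch; HOOKS and resolve_callable are hypothetical helpers, and the DL is assumed to have been parsed into a services dict):

from importlib import import_module

HOOKS = {}  # filled by the application: {'authenticate': authenticate, ...}

def resolve_callable(dotted):
    # Resolve a dotted name like 'module.class.method' to a callable.
    parts = dotted.split('.')
    obj = import_module(parts[0])
    for name in parts[1:]:
        obj = getattr(obj, name)
    return obj

def dispatch(services, method, path, request):
    spec = services['%s "%s"' % (method, path)]
    for name in spec.get('pre-hooks', []):
        HOOKS[name](request)                 # e.g. authenticate(request)
    response = resolve_callable(spec['implemented_by'])(request)
    for name in spec.get('post-hooks', []):
        response = HOOKS[name](response)     # e.g. jsonify(response)
    return response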
This approach is quite similar to Zope's ZCML glue, but without the ZCA layer (which I tend to find heavy and overkill).
|
Posted about 14 years ago by The PyCon blog
By Brian Curtin

The Python world has come a long way since December 2008, when 3.0 was first released. Books have been released, blogs have been written, and most importantly, projects have been ported. Recently, NumPy and SciPy checked in their porting work. We've heard rumblings of Django on 3.x, possibly as early as this summer. Python 3.1.3 was released in the fall and 3.2 final is around a week away, and with 2.7 being the end of the 2.x line, all core hands are on Python 3.

Lennart Regebro knows all about this. He's the author of a new book, Porting to Python 3, and he's giving a talk by the same name. The idea to write the book came from a lack of published material on the topic and an interest in writing for the now defunct Python Magazine. "The lack of documentation has been the biggest hurdle, [so] if you want to port to Python 3 you have been stepping into the dark. Since I had been using Python 3 and porting to it in my free time since early 2008, I had a bit of experience to share," says Lennart. His article series then became the basis for a book, which he created with reStructuredText and Sphinx.

While he agrees that the separation of string contents into binary data and Unicode text was the right move, it's a challenge you'll have to undertake if your application doesn't already handle all text as Unicode. "This is where you can expect the biggest problems," he claims. Luckily he's taking the time to cover it in his talk. He also covers the important topic of porting strategies, including branching, continuous 2to3 conversion, and single-codebase projects.

Asked about his hardest porting project, zope.testing appears to be the winner. The package used doctests from before they were included in the standard library, along with a custom testrunner module, so the first step was to separate and deprecate. "I think I ended up deleting the port and restarting two or three times, either because I made a hash of it or the trunk code had changed so much that it was easier to restart than to merge the changes." Although the port isn't complete, "porting a package takes between a couple of hours and a couple of days, and is a lot of fun, except if you have a lot of doctests."

The Zope Component Architecture tops his most-wanted list and is the driving force behind his efforts. "It's really cool, but uses a lot of Python internals, so porting it is a challenge," he says, mentioning that a further complication is the need to write fixers, for which there is little documentation. Understanding how 2to3 works internally was another challenge, which led to a chapter in his book. Along with 2to3, Benjamin Peterson's six package has been helping Lennart with his porting. "I was planning to write such a module myself, but now I'm glad I didn't, because Benjamin did a much better job than I would have," he says.

Lennart is a PyCon veteran, having attended the 2008 and 2009 conferences in Chicago, along with several EuroPython events, as well as the Polish and French PyCons. The evenings are some of his favorite times, "because there are so many people around you that are much smarter than you are, and are friendly and open and willing to hang with you over coffee or a drink." Sprints were one of the highlights of his 2009 experience, where he organized a Zope sprint without hopes of a great turnout. He ended up being wrong: "we got quite a big gathering with many of the top Zope names and had some fantastic discussions on the way forward for Zope, as well as an extremely productive sprint!" He finished the interview by saying, "that was great fun, and those type of things seem to happen a lot on PyCon."

If you're interested in the PyCon sprints, check out the sprint page, and don't forget to buy your tickets soon!
|
Posted about 14 years ago by Plone News
The following was written by his friend and Plone co-founder Alan Runyan and shared with the community on behalf of those who knew Dorneles, and those never lucky enough to have met him in person.
Finding out about losing Dorneles Treméa has been gut-wrenching. He has been one of those silent heroes in my life that you always expect to be there. He was the physical manifestation of Tranquility. It was a real joy interacting with him both virtually and physically. Dorneles had tremendous patience and enthusiasm for teaching. If there was an opportunity for him to share anything he knew with someone else, he would take the time to do it.
Many people in the Python, Zope & Plone and Brazilian FOSS communities will feel the hole that Dorneles has left in our collective psyche. I knew Dorneles through the Zope & Plone communities. He always was engaged, interactive, and pushing the software and culture forward. Dorneles was a hacker. He made things better. He translated many different software packages to Portuguese (possibly responsible for the first Plone translation), wrote lots of software and spoke at many conferences. He loved problem solving and sharing.
I will deeply miss him. I will miss signing for the dozens and dozens of online purchases he would have delivered prior to his arrival in Houston. I will miss trying to order him an alcoholic beverage at a bar, and him politely declining, asking for juice on tap. I will miss skeet shooting with him. I'm looking at his Skype profile picture. The picture is from the last time we went shooting. His computer is still connected to Skype. His status is Away. I IM'ed, "I will miss you."
Dorneles is survived by his wife, Flaviane Machado, and his two daughters, Helen (11) and Ingrid (8). They live in Garibaldi, in southern Brazil. My heart goes out to his wife and children. The only consolation I can offer is that there are many places in the world where Dorneles made an impact. In the technical FOSS community, Dorneles was a recognized and respected member. He was a great person.
— Alan Runyan
|
Posted about 14 years ago by RedTurtle Technology
I have been using Pyramid/BFG in several of our projects. It really rocks; you probably already know that. What I think could be an extremely useful add-on is a CRUD. For our internal usage (more as a proof of concept) we have developed one: traversal with SQLAlchemy support. Now we want to make it more generic and open-source it. Interested?
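Not our actual code, but a minimal sketch of the general idea (traversal over SQLAlchemy models in Pyramid) might look like this:

from pyramid.config import Configurator
from pyramid.response import Response
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Session = sessionmaker(bind=engine)
Base.metadata.create_all(engine)

class ModelContainer(object):
    # Traversal container: container['42'] loads the row with primary key 42.
    def __init__(self, session, model):
        self.session = session
        self.model = model

    def __getitem__(self, key):
        try:
            obj = self.session.query(self.model).get(int(key))
        except ValueError:
            raise KeyError(key)
        if obj is None:
            raise KeyError(key)  # Pyramid turns this into a 404
        return obj

class Root(dict):
    # Traversal root, so that /users/42 reaches a User row.
    def __init__(self, request):
        dict.__init__(self, users=ModelContainer(Session(), User))

def show_user(context, request):
    # context is the User instance found by traversal.
    return Response('User: %s' % context.name)

config = Configurator(root_factory=Root)
config.add_view(show_user, context=User)

Create, update and delete views would be registered against the same contexts; the value of a generic package is doing this once for any mapped class.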
|
Posted about 14 years ago by Martin Aspeli
From the book-writing procrastination dep't.
I've recently had to step away from a great project; it was difficult to continue after having to leave the country. :-) The project goes live in a few months, but (hopefully) we managed to finish most of the development work.
I can't talk too much about the project itself, except to say that when it launches, it will probably be one of the highest-traffic Plone 4 sites in existence. Moreover, it's a site with long lulls followed by huge spikes in traffic, which needs to cater both to logged-in members of the public and to anonymous users, with different load profiles.
I do want to talk a bit about the tools and technologies we used, in the hopes that others will find it interesting. This is easily the most sophisticated stack I've ever used in a real-world project. And on the whole, it's worked extremely well so far.
The team
Beyond myself, the team consisted of two very capable full-time developers with experience of Plone 2.5 and through-the-web development, but limited Plone "filesystem" experience. In addition, we had the help of a network engineer and a tester, as well as a peripheral team of testers, trainers and others.
A side-goal of the project was to leave behind tools and working practices that could be applied to future development projects. As such, a lot of thought went into how the development environment was set up and how it could be reused.
Development environment
We used an agile development process centered around Scrum. To support this, we used Pivotal Tracker to manage stories and defects. This is my third attempt at using Pivotal in anger, and I'm pretty happy with it. Whilst certainly not perfect, it's simple and user friendly enough to fit into my preferred workflow, and it helps the release planning and story estimation process. That, and the price is right.
For source control, we used an internal Subversion server. This was a step up from CVS. I'm pretty confident that Subversion was the right choice: a DVCS would have been far too complicated and confusion-prone for this project.
We installed Trac on our development server as well. We didn't actually use its issue tracking capabilities (since we kept all features and defects in Pivotal), but made use of its wiki and the Subversion browser. I find a project wiki useful for keeping track of "tips and tricks", build instructions, third party contact details and other transient information. I'm not sure Trac was the only (or even the best) choice here, but it did the job. My biggest gripe with it is probably that the wiki syntax is a bit awkward.
Naturally, our development process insisted on having tests for everything. We used Hudson to run the tests regularly and alert us to any regressions. Continuous Integration is hugely important. If you don't have it in your project, get it now. Hudson is easy to set up and flexible enough for all your CI needs.
The last thing to go on our development server was an Apache instance serving a static directory to which selected users had write access over scp. We used this as the release target for jarn.mkrelease when making internal releases of our own packages: The rule was that no production release could contain Subversion checkouts of packages. Instead, we made internal releases to this directory, which was listed in the find-links option in our buildout.
All of our environments were managed with Buildout, of course. To make the buildouts more re-usable and robust, we kept a number of buildout files in a directory called buildout.d. Each file was responsible for building or configuring one aspect of the system. For example, prod-nginx.cfg would configure nginx for the production server, and dev-beaker.cfg would configure Beaker (which we used for session management via collective.beaker) for the development environment. In addition, buildout.d/templates contained templates for configuration files, which were set up using collective.recipe.template.
At the top level, we had the following files:
- packages.cfg, listing known good sets for packages we used, specifying checkout locations and packages for checkout by mr.developer (an indispensable development package management tool), and defining egg working sets for deployment and testing. The buildout.dumppickedversions extension was used to notify us of unpinned dependencies.
- versions.cfg, containing our own known good set for our own released packages, as well as third-party dependencies not covered by another known good set. This file was included from packages.cfg.
- One top-level file for each environment. The default buildout.cfg was used for the development environment. The other files had names corresponding to servers, e.g. prod-app-master.cfg for the main application server and prod-web-1.cfg for the first of two web servers.
The top level "environment" files were only allowed to include extends lines to bring in the required components, and settings for host names, ports, users, etc. For example:
[buildout]
extends =
    buildout.d/base.cfg
    buildout.d/prod-base.cfg
    packages.cfg
    buildout.d/prod-lxml.cfg
    buildout.d/postgres.cfg
    buildout.d/postgres-relstorage.cfg
    buildout.d/prod-beaker.cfg
    buildout.d/prod-instance.cfg
# Hostnames to use for various services
[hosts]
public = www.example.org
master = server-1
slave = server-2
postgres = server-1
# Ports to use for various services
[ports]
instance1 = 8801
instance2 = 8802
instance3 = 8803
instance4 = 8804
postgres = 5432
# Users to run as
[users]
zope = nobody
postgres = nobody
Each file in buildout.d used the same pattern. Here is an example to build HAProxy:
##############################################################################
# Production HAProxy - load balancer
##############################################################################
[buildout]
parts += haproxy-build haproxy-config
# Configuration
# *************
[hosts]
haproxy = localhost
[ports]
haproxy = 8200
[users]
haproxy = nobody
[downloads]
haproxy = http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.8.tar.gz
[haproxy-build]
target = generic
cpu = generic
# Recipes
# *******
[haproxy-build]
recipe = plone.recipe.haproxy
url = ${downloads:haproxy}
[haproxy-config]
recipe = collective.recipe.template
input = ${buildout:directory}/buildout.d/templates/haproxy.conf.in
output = ${buildout:directory}/etc/haproxy.conf
The settings under [hosts] and [ports] are expected to be overridden in the top-level buildout file.
In the development buildout, we installed a number of tools:
- An "omelette" of all installed eggs for easier debugging, using collective.recipe.omelette.
- A test runner and coverage reporting tool.
- A script to help check for new versions of pinned packages.
- ZopeSkel, for making new packages.
- jarn.mkrelease, for making internal releases easily.
We also installed the following eggs into the main development Zope instance:
- bpython, for a nice interactive shell
- plone.reload - absolutely indispensable
- Products.PDBDebugMode, for instant debugging
- Products.PrintingMailHost, to help debug code that sends mail
Finally, we installed Sphinx, which we used to build documentation from reStructuredText files under source control in the docs directory in the build. This is probably the thing I'm most pleased about. We had a rule that no story could be completed without documentation being added to Sphinx. We then set up Hudson to automatically build and deploy the documentation after a successful build. The result is the best-documented project I've ever worked on. Design decisions, maintenance tasks, critical go-live activities and "how the hell did that work again" type documentation all found their way into the Sphinx documentation. Instead of leaving all the docs to the end, we had a continually expanding body of knowledge, and a process to ensure that it was not neglected during busy times of the project.
Each developer's machine ran Mac OS X with TextMate, Terminal and Firefox as the main "IDE". Firebug was of course installed. We used the Zope bundle for TextMate, which includes "pyflakes-on-save" functionality - a big time saver and code quality improver. We also used David Glick's mr.igor to help remember Python imports.
During deployment, we used FunkLoad for extensive load testing. At one point, we had two 8-way/32GB machines generating load. To facilitate that, we wrote some scripts, since released as BenchMaster. If you have never worked with FunkLoad or done proper load testing of your solutions, you're missing out. It's hugely important, and it helped us identify a number of bottlenecks and optimisations; without them, the site would almost certainly have been brought to its knees in its first week after launch.
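For readers who have not used FunkLoad: a minimal test case looks roughly like this (a sketch, not one of our actual tests; the server URL is read from the matching FunkLoad .conf file):

from funkload.FunkLoadTestCase import FunkLoadTestCase

class Simple(FunkLoadTestCase):
    # The server URL comes from the test's configuration file.
    def setUp(self):
        self.server_url = self.conf_get('main', 'url')

    def test_simple(self):
        self.get(self.server_url, description='Get the home page')

The same test class is then reused for load generation with fl-run-bench, which is what makes FunkLoad so pleasant: functional tests double as benchmark scenarios.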
Production deployment
We used a number of technologies in the production deployment. Each deserves a blog post in its own right, but here is a quick run-down:
- We had two identical, redundant servers running nginx and Varnish.
- nginx was used to accept SSL traffic, perform Zope virtual host monster URL rewriting, and force the user into and out of SSL as necessary. We also used nginx to add certain request headers used to optimise caching and load balancing, and to serve a "panic page" - a static HTML file to which HAProxy would redirect if no Zope backends were available.
- Varnish did what Varnish does: make the site fast. We used the Varnish configuration bundled with plone.app.caching as a starting point, and tweaked it for our fairly unique load profile.
- Behind these servers, we had two application servers: one running HAProxy, Zope, Memcached and PostgreSQL, and one running additional Zope instances.
- HAProxy was configured to distribute load across the back-end Zope instances. It used headers set by nginx to route content authors, other logged-in users and anonymous users to appropriate back-end Zope instances. We kept a pool of "shared" instances, with some instances ring-fenced for certain types of traffic. If no instances could be found, HAProxy would redirect the user to a "panic page" served directly by nginx.
- Zope was used to run Plone, obviously. In total, we had 16 Zope instances on each of the two 8-way back-end machines. These were configured to use RelStorage against a Postgres database. Additional relational database access was provided via SQLAlchemy. Session management and shared caching used Beaker (via collective.beaker), which was configured to store its data in Memcached, allowing sessions to be non-sticky. Theming was provided by XDV, via collective.xdv (in our tests, we got better performance out of this than deploying the theme to a separate nginx instance). Cache control was provided by plone.app.caching. All custom content types were built using Dexterity. We used collective.tinymcetemplates for content templating, and collective.transmogrifier for migration from the previous site.
- Memcached was used by RelStorage and Beaker.
- Additionally, each build contained a specific Supervisord configuration to start and stop all relevant services with a single command.
- We configured a central syslog server using rsyslog, collecting logs from all relevant services on all production servers. This was configured to insert log entries into a separate Postgres database. We created views for common log queries (e.g. "all errors in the last 24 hours"), and exposed these via phpPgAdmin - a simple but effective solution for centralised log analysis.
- The logging server also acted as a Munin server, with each production server acting as a node.
Overall, this project started to "feel right" pretty soon. The development environment and infrastructure held up very well, and was able to accommodate changes both in the requirements and our understanding of the problem domain. Some highlights for me were:
- We managed to build documentation with the code, by incorporating Sphinx into our workflow.
- We avoided deploying code from Subversion by using internal releases. jarn.mkrelease was a big help here.
- Varnish is just such an awesome piece of software.
- And nginx is not much worse. :-)
- HAProxy could handle everything we threw at it, and then some. I will definitely use it again. For very simple scenarios, the built-in nginx or Varnish load balancers may suffice, but for complex setups like this, HAProxy is awesome.
- FunkLoad was a revelation. Not only did we find bottlenecks we wouldn't have found otherwise: thinking through the load test scenarios and the load test results helped us understand how our site would need to be built to perform acceptably under load.
- Plone 4 is a fantastic release. We started around beta 2, and it's been virtually flawless since, save for a few minor hiccups.
- XDV is clearly the future of theming - perhaps not just Plone theming, but theming in general. We ended up espousing several improvements that Laurence kindly put in for us. With the 0.4 release, I think it's reaching maturity. It quickly became an integral part of our workflow, and a favourite of one of our team members, who has considerable experience theming Plone and other CMSes.
- Having taught people Archetypes development in the past, I have no doubt that I prefer teaching Dexterity (and Grok-like views). It's quicker to learn, more intuitive, and more consistent with "modern" Zope and Plone.
For me personally, this project was a very positive experience. If nothing else, it has taught me a lot of things I intend to put in the book I should be writing an update for right now...
|
Posted about 14 years ago by Plone, web standards and e-commerce - blog
Most secure open source CMS
Plone has proven to be secure before, and updated statistics from the CVE database show that Plone is still the most secure CMS. Below are the last five years of statistics, and I will let them speak for themselves.
CMS vulnerabilities (CVE entries per year)

CMS          2006   2007   2008   2009   2010
Drupal         39     37    107    126     44
Joomla         72     66     66     76     95
Plone           3      1      6      1      1
WordPress      18     63     66     27     13
Plone vulnerability process
Plone is secure, but even it has not managed a year with zero security issues, and probably nobody ever will. The big difference lies in how the found vulnerabilities are handled. Today was the big patch Tuesday, or "ploneaggedon" as some called it on Twitter. Even with a vulnerability like this, Plone shows that it is mature and has a process for handling them. The security team announced the upcoming patch one week in advance, before the patch and its details were made public. This gave all the Plone companies the chance to plan upgrades and downtime for their customers. Great work, security team, and sweet dreams.
|
Posted about 14 years ago by Weblog
See these pages for info on why you may want this:
http://plone.org/products/plone/security/advisories/cve-2011-0720 and
http://plone.org/documentation/kb/disable-logins-for-a-plone-site
If you want to change lots of nginx config files to temporarily switch off logins (authentication) and cookies, you can use this bash script at your own risk:
#! /bin/bash
# Note: /bin/sh would be better, but at least when that points to
# /bin/dash it complains about some of my usage of 'test'.
# NB: the script body was lost in syndication; this loop is a reconstruction
# assuming the usual fix of appending proxy_set_header lines that strip auth.
for CONFFILE in /etc/nginx/sites-enabled/*; do
    if grep -q proxy_pass "$CONFFILE"; then
        if test -w "$CONFFILE"; then
            if ! grep -q 'proxy_set_header Authorization ""' "$CONFFILE"; then
                cat >> "$CONFFILE" <<EOF
proxy_set_header Authorization "";
proxy_set_header Cookie "";
EOF
            fi
        fi
    fi
done
echo "Do not forget to reload or restart nginx after changes."