ZODB, the
Zope Object Database, is a full-featured and
Python-specific object-oriented database (OODB) system. ZODB can be
thought of as a more powerful alternative to Python’s shelves of the
preceding section. It allows you to store nearly arbitrary Python objects
persistently by key, like shelves, but it adds a set of additional
features in exchange for a small amount of extra interface code.
ZODB is not the only OODB available for Python: the Durus system
is generally seen as a simpler OODB inspired by
ZODB. While Durus offers some advantages, it does not provide all the
features of ZODB today, and it has not been as widely deployed (perhaps
in part because it is newer). Because of that, this section
focuses on ZODB to introduce OODB concepts in general.
ZODB is an open source, third-party add-on for Python. It was
originally developed as the database mechanism for websites developed with
the Zope web framework mentioned in
Chapter 12,
but it is now available as a standalone package. It’s useful outside the
context of both Zope and the Web as a general database management system
in any domain.
Although ZODB does not support SQL queries, objects stored in ZODB
can leverage the full power of the Python language. Moreover, in some
applications, stored data is more naturally represented as a structured
Python object. Table-based relational systems often must represent such
data as individual parts scattered across multiple tables and associate
them with complex and potentially slow key-based joins, or otherwise map
them to and from the Python class model. Because OODBs store native Python
objects directly, they can often provide a simpler model in systems which
do not require the full power of SQL.
Using a ZODB database is very similar to Python’s standard library
shelves, described in the prior section. Just like shelves, ZODB uses the
Python pickling system to implement a persistent dictionary of persistent
Python objects. In fact, there is almost no database interface to be
found—objects are made persistent simply by assigning them to keys of the
root ZODB dictionary object, or embedding them in objects stored in the
database root. And as in a shelve, “records” take the form of native
Python objects, processed with normal Python syntax and tools.
Unlike shelves, though, ZODB adds features critical to some types of
programs:
You don’t need to manually lock files to avoid data corruption
if there are potentially many concurrent writers, the way you would
for shelves.
If your program crashes, your changes are not retained unless
you explicitly commit them to the database.
Objects in ZODB derived from a persistence superclass are
smart enough to know the database must be updated when an attribute
is assigned.
Objects are cached in memory for efficiency and are
automatically removed from the cache when they haven’t been
used.
Because ZODB stores your database in a single flat file with
large-file support, it is immune to the potential size constraints
and DBM filesystem format differences of shelves. As we saw earlier
in this chapter, a shelve created on Windows using bsddb
may not be accessible to a script running with gdbm on
Linux.
Because of such advantages, ZODB is probably worth your attention if
you need to store Python objects in a database persistently in a
production environment. The only significant price you’ll pay for using
ZODB is a small amount of extra code:
Accessing the database requires a small amount of extra
boilerplate code to interface with ZODB—it’s not a simple open
call.
Classes are derived from a persistence superclass if you want
them to take advantage of automatic updates on changes—persistent
classes are generally not as completely independent of the database
as in shelves, though they can be.
Considering the extra functionality ZODB provides beyond shelves,
these trade-offs are usually more than justified for
many applications.
Unfortunately, as I write
this edition in June 2010, ZODB is not yet available for
Python 3.X, the version used in this book. Because of that, the prior
edition’s Python 2.X examples and material have been removed from this
section. However, in deference to Python 2.X users, as well as 3.X
readers of some bright future where a 3.X-based ZODB has materialized,
I’ve made the prior edition’s ZODB materials and examples available in
this edition’s examples package.
See the
Preface
for details on the examples
package, and see these locations within it for more on ZODB:
C:\...\Dbase\Zodb-2.x                 # ZODB examples code, third edition
C:\...\Dbase\Zodb-2.x\Documentation   # The 3rd Edition's ZODB tutorial
Although I cannot predict the future, ZODB will likely become
available for Python 3.X eventually. In the absence of this, other
Python-based OODBs may offer additional 3.X options.
To give you a brief sample of ZODB’s flavor, though, here’s a
quick spin through its operation in Python 2.X. Once we’ve installed a
compatible ZODB, we begin by first creating a database:
...\PP4E\Dbase\Zodb-2.x> python
>>> from ZODB import FileStorage, DB
>>> storage = FileStorage.FileStorage(r'C:\temp\mydb.fs')
>>> db = DB(storage)
>>> connection = db.open()
>>> root = connection.root()
This is mostly standard “boilerplate” code for connecting to a
ZODB database: we import its tools, create a FileStorage
and a DB from it, and then open the database and
create the root object. The root object is the
persistent dictionary in which objects are stored. FileStorage
is an object that maps the
database to a flat file. Other storage interface options, such as
relational database-based storage, are also possible.
Adding objects to a ZODB database is as simple as in shelves.
Almost any Python object will do, including tuples, lists, dictionaries,
class instances, and nested combinations thereof. As for shelve,
simply assign your objects to a key in
the database root object to make them persistent:
>>> object1 = (1, 'spam', 4, 'YOU')
>>> object2 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> object3 = {'name': ['Bob', 'Doe'],
               'age':  42,
               'job':  ('dev', 'mgr')}
>>> root['mystr'] = 'spam' * 3
>>> root['mytuple'] = object1
>>> root['mylist'] = object2
>>> root['mydict'] = object3
>>> root['mylist']
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Because ZODB supports transaction rollbacks, you must commit your
changes to the database to make them permanent. Ultimately, this
transfers the pickled representation of your objects to the underlying
file storage medium—here, three files that include the name of the file
we gave when opening:
>>> import transaction
>>> transaction.commit()
>>> storage.close()

...\PP4E\Dbase\Zodb-2.x> dir /B c:\temp\mydb*
mydb.fs
mydb.fs.index
mydb.fs.tmp
Without the final commit in this session, none of the changes we
made would be saved. This is what we want in general—if a program aborts
in the middle of an update task, none of the partially complete work it
has done is retained. In fact, ZODB supports general database undo
operations.
Pulling persistent objects back from ZODB in another session or
program is just as straightforward: reopen the database as before and
index the root to fetch objects back into memory. Like shelves, the
database root supports dictionary interfaces—it may be indexed, has
dictionary methods and a length, and so on:
...\PP4E\Dbase\Zodb-2.x> python
>>> from ZODB import FileStorage, DB
>>> storage = FileStorage.FileStorage(r'C:\temp\mydb.fs')
>>> db = DB(storage)
>>> connection = db.open()
>>> root = connection.root()                        # connect
>>> len(root), root.keys()                          # size, index
(4, ['mylist', 'mystr', 'mytuple', 'mydict'])
>>> root['mylist']                                  # fetch objects
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> root['mydict']
{'job': ('dev', 'mgr'), 'age': 42, 'name': ['Bob', 'Doe']}
>>> root['mydict']['name'][-1]                      # Bob's last name
'Doe'
Because the database root looks just like a dictionary, we can
process it with normal dictionary code—stepping through the keys list to
scan record by record, for instance:
>>> for key in root.keys():
        print('%s => %s' % (key.ljust(10), root[key]))
mylist => [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
mystr => spamspamspam
mytuple => (1, 'spam', 4, 'YOU')
mydict => {'job': ('dev', 'mgr'), 'age': 42, 'name': ['Bob', 'Doe']}
Also like pickling and shelves, ZODB supports storage and
retrieval of class instance objects, though they must inherit from a
superclass which provides required protocol and intercepts attribute
changes in order to flush them to disk automatically:
from persistent import Persistent

class Person(Persistent):
    def __init__(self, name, job=None, rate=0):
        self.name = name
        self.job  = job
        self.rate = rate
    def changeRate(self, newrate):
        self.rate = newrate          # automatically updates database
When changing ZODB persistent class instances, in-memory attribute
changes are automatically written
back to the database. Other types of changes, such as in-place list
appends and dictionary key assignments, still require
reassignment to the original key as in shelves to force the change to be
written to disk (built-in list and dictionary objects do not know that
they are persistent).
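Because ZODB itself is off the table for 3.X here, the reassignment rule is easiest to demonstrate with the standard library's shelve module, which imposes the same requirement; the following runnable sketch (the temporary file path is illustrative) shows an in-place change being silently lost, and then forced to disk by reassignment:

```python
import os, shelve, tempfile

# A throwaway shelve file in a temporary directory (path is illustrative)
path = os.path.join(tempfile.mkdtemp(), 'mutabledemo')
db = shelve.open(path)
db['mylist'] = [1, 2, 3]          # store a mutable object under a key

db['mylist'].append(4)            # in-place change mutates a fetched copy only
print(db['mylist'])               # still [1, 2, 3]: the append was lost

temp = db['mylist']               # fetch, change, and reassign to the key
temp.append(4)
db['mylist'] = temp               # reassignment writes the change back
print(db['mylist'])               # now [1, 2, 3, 4]
db.close()
```

A ZODB Persistent subclass like Person avoids this dance for attribute assignments, but its built-in list and dictionary members still need it.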
Because ZODB does not yet work with Python 3.X, that’s as much as
we can say about it in this book. For more details, search for ZODB and
Zope resources on the Web, and see the examples package resources listed
earlier. Here, let’s move on to see how Python programs can make use of
a very different sort of database interface—relational databases and
SQL.
The shelve module and ZODB package of the prior sections are powerful
tools. Both allow scripts to throw nearly arbitrary Python objects on a
keyed-access file and load them back later—in a single step for shelves
and with a small amount of administrative code for ZODB. Especially for
applications that record highly structured data, object databases can be
convenient and efficient—there is no need to split and later join together
the parts of large objects, and stored data is processed with normal
Python syntax because it is normal Python objects.
Shelves and ZODB aren’t relational database systems, though; objects
(records) are accessed with a single key, and there is no notion of SQL
queries. Shelves, for instance, are essentially databases with a single
index and no other query-processing support. Although it’s possible to
build a multiple-index interface to store data with multiple shelves, it’s
not a trivial task and requires manually coded extensions.
ZODB supports some types of searching beyond shelve (e.g., its
cataloging feature), and persistent objects may be traversed with all the
power of the Python language. However, neither shelves nor ZODB
object-oriented databases provide the full generality of SQL queries.
Moreover, especially for data that has a naturally tabular structure,
relational databases may sometimes be a better fit.
For programs that can benefit from the power of SQL, Python also
broadly supports relational database management systems (RDBMSs).
Relational databases are not necessarily mutually exclusive with the
object persistence topics we studied earlier in this chapter—it is
possible, for example, to store the serialized string representation of a
Python object produced by pickling in a relational database. ZODB also
supports the notion of mapping an object database to a relational storage
medium.
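As a quick illustration of the first of these ideas, here is a small sketch (not from this book's examples; the table name is invented) that stores the pickled representation of a Python object in a SQLite BLOB column and later restores it to a live object:

```python
import pickle, sqlite3

conn = sqlite3.connect(':memory:')                    # throwaway database
conn.execute('create table objects (key text, data blob)')

record = {'name': 'Bob', 'jobs': ['dev', 'mgr']}      # an arbitrary Python object
conn.execute('insert into objects values (?, ?)',
             ('bob', pickle.dumps(record)))           # pickled bytes -> BLOB column

blob = conn.execute('select data from objects where key = ?',
                    ('bob',)).fetchone()[0]
print(pickle.loads(blob))                             # back to a live dictionary
conn.close()
```

The relational system here knows nothing of the object's structure; it simply stores and returns the pickled bytes.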
The databases we’ll meet in this section, though, are structured and
processed in very different ways:
They store data in related tables of columns (rather than in
persistent dictionaries of arbitrarily structured persistent Python
objects).
They support the SQL query language for accessing data and
exploiting relationships among it (instead of Python object
traversals).
For some applications, the end result can be a potent combination.
Moreover, some SQL-based database systems provide industrial-strength
persistence support for enterprise-level data.
Today, there are freely available interfaces that let Python scripts
utilize all common relational database systems, both free and
commercial: MySQL, Oracle, Sybase, Informix, InterBase, PostgreSQL
(Postgres), SQLite, ODBC, and more. In addition, the Python community has
defined a
database API
specification that works
portably with a variety of underlying database packages. Scripts written
for this API can be migrated to different database vendor packages, with
minimal or no source code changes.
As of Python 2.5, Python itself includes built-in support for the
SQLite relational database system as part of its standard library. Because
this system supports the portable database API, it serves as a tool for
both program storage and prototyping—systems developed with SQLite work
largely unchanged when a more feature-rich database such as MySQL or
Oracle is deployed.
Moreover, the popular
SQLObject and SQLAlchemy third-party systems both provide
an Object Relational Mapper (ORM), which grafts an object interface onto
your database, in which tables are modeled by Python classes, rows by
instances of those classes, and columns by instance attributes. Since ORMs
largely just wrap SQL databases in Python classes, we’ll defer their
coverage until later in this chapter; for now, let’s explore SQL basics in
Python.
Like ZODB, and unlike the pickle and shelve
persistence modules presented
earlier, most SQL databases are optional extensions that are not part of
Python itself. SQLite is the only relational database package that comes
with Python. Moreover, you need to know SQL to fully understand their
interfaces. Because we don’t have space to teach SQL in this text, this
section gives a brief overview of the API; please consult other SQL
references and the database API resources mentioned in the next section
for more details that we’ll skip here.
The good news is that you can access SQL databases from Python,
through a straightforward and portable model. The Python database API
specification defines an interface for communicating with underlying
database systems from Python scripts. Vendor-specific database
interfaces for Python may or may not conform to this API completely, but
all database extensions for Python in common use are minor variations on
a theme. Under the database API, SQL databases in Python are grounded on
three core concepts:

Connection objects
Represent a connection to a database, are the interface to
rollback and commit operations, provide package implementation
details, and generate cursor objects.

Cursor objects
Represent an SQL statement submitted as a string and can be
used to access and step through SQL statement results.

Query results of SQL select statements
Are returned to scripts as Python sequences of sequences
(e.g., a list of tuples), representing database tables of rows.
Within these row sequences, column field values are normal Python
objects such as strings, integers, and floats (e.g., [('bob', 48),
('emily', 47)]). Column
values may also be special types that encapsulate things such as
date and time, and database NULL values are returned as the Python
None object.
Beyond this, the API defines a standard set of database exception
types, special database type object constructors, and informational
top-level calls including thread safety and replacement style
checks.
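For instance, the sqlite3 module we'll meet shortly exposes this standard exception set as module attributes; a minimal sketch of trapping a database failure through the API's root exception class:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
curs = conn.cursor()
try:
    curs.execute('select * from nonesuch')   # query a table that doesn't exist
except sqlite3.Error as exc:                 # root of the module's API exceptions
    print('database error:', exc)
conn.close()
```

Catching the module's Error superclass traps its more specific subclasses (OperationalError, IntegrityError, and so on) as well.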
For instance, to establish a database connection under the Python
API-compliant Oracle interface, install the commonly used Python Oracle
extension module as well as Oracle itself, and then run a statement of
this form:
connobj = connect("user/password@system")
This call’s arguments may vary per database and vendor (e.g., some
may require network details or a local file’s name), but they generally
contain what you provide to log in to your database system. Once you
have a connection object, there a variety of things you can do with it,
including:
connobj.close()       close connection now (not at object __del__ time)
connobj.commit()      commit any pending transactions to the database
connobj.rollback()    roll database back to start of pending transactions
But one of the most useful things to do with a connection object
is to generate a cursor object:
cursobj = connobj.cursor()       return a new cursor object for running SQL
Cursor objects have a set of methods, too (e.g., close
to close the cursor before its
destructor runs, and callproc to call
a stored procedure), but the most important may be this one:
cursobj.execute(sqlstring [, parameters])       run SQL query or command string
Parameters are passed in as a sequence or mapping of values, and
are substituted into the SQL statement string according to the interface
module’s replacement target conventions. The execute
method can be used to run a variety of
SQL statement strings:
DDL definition statements (e.g., CREATE TABLE)
DML modification statements (e.g., UPDATE or INSERT)
DQL query statements (e.g., SELECT)
After running an SQL statement, the cursor’s rowcount
attribute gives the number of rows
changed (for DML changes) or fetched (for DQL queries), and the cursor’s
description attribute gives column
names and types after a query; execute
also returns the number of rows
affected or fetched in most vendor interfaces. For DQL query
statements, you must call one of the fetch
methods to complete the
operation:
tuple       = cursobj.fetchone()          fetch next row of a query result
listoftuple = cursobj.fetchmany([size])   fetch next set of rows of query result
listoftuple = cursobj.fetchall()          fetch all remaining rows of the result
And once you’ve received fetch method results, table information
is processed using normal Python sequence operations; for example, you
can step through the tuples in a fetchall
result list with a simple for
loop or comprehension expression. Most
Python database interfaces also allow you to provide values to be passed
to SQL statement strings, by providing targets and a tuple of
parameters. For instance:
query = 'SELECT name, shoesize FROM spam WHERE job = ? AND age = ?'
cursobj.execute(query, (value1, value2))
results = cursobj.fetchall()
for row in results:...
In this event, the database interface utilizes prepared statements
(an optimization and convenience) and correctly passes the parameters to
the database regardless of their Python types. The notation used to code
targets in the query string may vary in some database interfaces (e.g.,
:p1 and :p2, or two %s,
rather than the two ?s used by the Oracle interface); in any
event, this is not the same as Python’s %
string formatting operator, as it sidesteps
security issues along the way.
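To make the security point concrete, here is a small sqlite3 sketch (the spam table and its data are invented for illustration) showing that a bound parameter is handed to the database as pure data, never spliced into the SQL text:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
curs = conn.cursor()
curs.execute('create table spam (name char(30), job char(10), age int)')
curs.execute('insert into spam values (?, ?, ?)', ('bob', 'dev', 48))

# A bound parameter is passed separately from the SQL text, so quoting
# characters in the value cannot alter the statement's meaning
hostile = "dev' OR '1'='1"
curs.execute('select name from spam where job = ?', (hostile,))
matches = curs.fetchall()
print(matches)                     # []: the value is just an odd job title

curs.execute('select name from spam where job = ?', ('dev',))
hits = curs.fetchall()
print(hits)                        # [('bob',)]
conn.close()
```

Splicing the same value into the statement with Python's % operator would both break on the embedded quote and invite SQL injection.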
Finally, if your database supports stored procedures, you can call
them with the callproc method or by
passing an SQL CALL or EXEC
statement string to the execute
method. callproc
may generate a result table retrieved
with a fetch variant, and returns a
modified copy of the input sequence—input parameters are left untouched,
and output and input/output parameters are replaced with possibly new
values. Additional API features, including support for database blobs
(roughly, with sized results), are described in the API’s documentation.
For now, let’s move on to do some real SQL processing in
Python.
We don’t have
space to provide an exhaustive reference for the database
API in this book. To sample the flavor of the interface, though, let’s
step through a few simple examples. We’ll use the SQLite database system
for this tutorial. SQLite is a standard part of Python itself, which you
can reasonably expect to be available in all Python installations.
Although SQLite implements a complete relational database system, it
takes the form of an in-process library instead of a server. This
generally makes it better suited for program storage than for
enterprise-level data needs.
Thanks to Python’s portable database API, though, other popular
database packages such as PostgreSQL, MySQL, and Oracle are used almost
identically; the initial call to log in to the database will be all that
normally requires different argument values for scripts that use
standard SQL code. Because of this, we can use the SQLite system both as
a prototyping tool in applications development and as an easy way to get
started with the Python SQL database API in this book.
As mentioned earlier, the third edition’s coverage of MySQL had
to be replaced here because the interface used is not yet ported to
Python 3.X. However, the third edition’s MySQL-based examples and
overview are available in the book examples package, in directory
C:\...\PP4E\Dbase\Sql\MySql-2.X, and its Documentation
database-related code is largely version neutral. Since that code is
also largely database neutral, it is probably of limited value to most
readers; the scripts listed in this book should work on other database
packages like MySQL with only trivial changes.
Regardless of which
database system your scripts talk to, the basic SQL
interface in Python is very simple. In fact, it’s hardly
object-oriented at all—queries and other database commands are sent as
strings of SQL. If you know SQL, you already have most of what you
need to use relational databases in Python. That’s good news if you
fall into this category, but adds a prerequisite if you don’t.
This isn’t a book on the SQL language, so we’ll defer to other
resources for details on the commands we’ll be running here (O’Reilly
has a suite of books on the topic). In fact, the databases we’ll use
are tiny, and the commands we’ll use are deliberately simple as SQL
goes—you’ll want to extrapolate from what you see here to the more
realistic tasks you face. This section is just a brief look at how to
use the Python language in conjunction with an SQL database.
Whether large or small, though, the Python code needed to
process your database turns out to be surprisingly straightforward. To
get started, the first thing we need to do is open a connection to the
database and create a table for storing records:
C:\...\PP4E\Dbase\Sql> python
>>> import sqlite3
>>> conn = sqlite3.connect('dbase1')      # use a full path for files elsewhere
We start out by importing the Python SQLite interface here—it’s
a standard library module known as sqlite3
to our scripts. Next we create a
connection object, passing in the items our database requires at
start-up time—here, the name of the local file where our databases
will be stored. This file is what you’ll want to back up to save your
database. It will create the file if needed, or open its current
content; SQLite also accepts the special string “:memory:” to create
a temporary database in memory instead.
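A minimal sketch of that in-memory option: such a database behaves like any other SQLite database, but leaves nothing behind when the connection goes away:

```python
import sqlite3

conn = sqlite3.connect(':memory:')          # lives only as long as this process
conn.execute('create table scratch (x int)')
conn.execute('insert into scratch values (?)', (1,))
rows = conn.execute('select x from scratch').fetchall()
print(rows)                                 # [(1,)]
conn.close()                                # the data vanishes with the connection
```

This makes ":memory:" handy for tests and throwaway prototyping, where persistence isn't the point.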
As long as a script sticks to using standard SQL code, the connect
call’s arguments are
usually the only thing that can vary across different database
systems. For example, in the MySQL interface this call accepts a
network host’s domain name, user name, and password, passed as keyword
arguments instead, and the Oracle example sketched earlier expects a
more specific string syntax. Once we’ve gotten past this
platform-specific call, though,
the rest of the API is largely database neutral.
Next, let’s make a
cursor for submitting SQL statements to the database
server, and submit one to create a first table:
>>> curs = conn.cursor()
>>> tblcmd = 'create table people (name char(30), job char(10), pay int(4))'
>>> curs.execute(tblcmd)
The last command here creates the table called “people” within
the database; the name, job, and pay information specifies the columns
in this table, as well as their datatypes, using a “type(size)”
syntax—two strings and an integer. Datatypes can be more sophisticated
than ours, but we’ll ignore such details here (see SQL references). In
SQLite, the file is the database, so there’s no notion of creating or
using a specific database within it, as there is in some systems. At
this point, there is a simple flat file in the current working
directory named dbase1, which
contains binary data representing our people
database table.
So far, we’ve
logged in (which just means opening a local file in
SQLite) and created a table. Next let’s start a new Python session and
create some records. There are three basic statement-based approaches
we can use here: inserting one row at a time, inserting multiple
rows with a single call, or using a Python loop. Here is the simple
case (I’m omitting some call return values here if they are irrelevant
to the story):
C:\...\PP4E\Dbase\Sql> python
>>> import sqlite3
>>> conn = sqlite3.connect('dbase1')
>>> curs = conn.cursor()
>>> curs.execute('insert into people values (?, ?, ?)', ('Bob', 'dev', 5000))
>>> curs.rowcount
1
>>> sqlite3.paramstyle
'qmark'
We create a cursor object to submit SQL statements to the database
server as before. The SQL insert
command adds a single row to the table. After an execute
call, the cursor’s rowcount
attribute gives the number of rows produced or affected by the last
statement run. This is also available as the return value of an
execute call in some database
interface modules, but this is not defined in the database API
specification, and isn’t provided in SQLite; in other words,
don’t depend on it if you want your database scripts to work on other
database systems.
Parameters to substitute into the SQL statement string are
generally passed in as a sequence (e.g., list or tuple). Notice the
module’s paramstyle—this tells us
what style it uses for substitution targets in the statement string.
Here, qmark means this module
accepts ? for replacement targets.
Other database modules might use styles such as format
(meaning a %s
target), or numeric indexes or mapping
keys; see the DB API for more details.
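Although sqlite3 declares qmark as its paramstyle, it happens to accept named targets with a mapping as well; a brief sketch of both styles, reusing our people table's layout in a throwaway in-memory database:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
curs = conn.cursor()
curs.execute('create table people (name char(30), job char(10), pay int)')

# qmark style: positional ? targets with a sequence of values
curs.execute('insert into people values (?, ?, ?)', ('Bob', 'dev', 5000))

# named style: :key targets with a mapping, which sqlite3 also accepts
curs.execute('insert into people values (:name, :job, :pay)',
             {'name': 'Sue', 'job': 'mus', 'pay': 70000})

curs.execute('select name, pay from people order by pay')
rows = curs.fetchall()
print(rows)                        # [('Bob', 5000), ('Sue', 70000)]
conn.close()
```

Scripts that stick to the module's declared paramstyle remain the most portable across database packages.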
To insert multiple rows with a single statement, use the
executemany method and a sequence
of row sequences (e.g., a list of lists). This call is like calling
execute once for each row sequence
in the argument, and in fact may be implemented as such; database
interfaces may also use database-specific techniques to make this run
quicker, though:
>>> curs.executemany('insert into people values (?, ?, ?)',
...          [ ('Sue', 'mus', '70000'),
...            ('Ann', 'mus', '60000')])
>>> curs.rowcount
2
We inserted two rows at once in the last statement. It’s hardly
any more work to achieve the same result by inserting one row at a
time with a Python loop:
>>> rows = [['Tom', 'mgr', 100000],
...         ['Kim', 'adm', 30000],
...         ['pat', 'dev', 90000]]
>>> for row in rows:
...     curs.execute('insert into people values (? , ?, ?)', row)
...
>>> conn.commit()
Blending Python and SQL like this starts to open up all sorts of
interesting possibilities. Notice the last command; we always need to
call the connection’s commit method
to write our changes out to the database. Otherwise, when the
connection is closed, our changes may be lost. In fact, until we call
the commit method, none of our
inserts may be visible from other database connections.
Technically, the API suggests that a connection object should
automatically call its rollback
method to back out changes that have not yet been committed, when it
is closed (which happens manually when its close
method is called, or automatically
when the connection object is about to be garbage collected). For
database systems that don’t support transaction commit and rollback
operations, these calls may do nothing. SQLite implements both the
commit and rollback
methods; the latter rolls back any
changes made since the last commit.
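A short sketch of rollback in action, using an in-memory database so it stands alone: committed work survives, but uncommitted inserts are discarded:

```python
import sqlite3

conn = sqlite3.connect(':memory:')          # in-memory so the sketch stands alone
curs = conn.cursor()
curs.execute('create table people (name char(30), job char(10), pay int)')
conn.commit()                               # the table itself is now permanent

curs.execute('insert into people values (?, ?, ?)', ('Bob', 'dev', 5000))
conn.rollback()                             # discard the uncommitted insert

curs.execute('select * from people')
rows = curs.fetchall()
print(rows)                                 # []: the insert was rolled back
conn.close()
```

This is the same mechanism that protects your data if a script dies partway through an update task: nothing since the last commit is retained.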
OK, we’ve now
added six records to our database table. Let’s run an
SQL query to see how we did:
>>> curs.execute('select * from people')
>>> curs.fetchall()
[('Bob', 'dev', 5000), ('Sue', 'mus', 70000), ('Ann', 'mus', 60000), ('Tom', 'mgr',
100000), ('Kim', 'adm', 30000), ('pat', 'dev', 90000)]
We run an SQL select statement
with a cursor object to grab all rows, and call the cursor’s
fetchall to retrieve them. They come back to
our script as a sequence of sequences. In this module, it’s a list of
tuples—the outer list represents the result table, the nested tuples
are that table’s rows, and the nested tuple’s contents are the column
data. Because it’s all Python data, once we get the query result, we
process it with normal Python code. For example, to make the display a
bit more coherent, loop through the query’s result as usual:
>>> curs.execute('select * from people')
>>> for row in curs.fetchall():
...     print(row)
...
('Bob', 'dev', 5000)
('Sue', 'mus', 70000)
('Ann', 'mus', 60000)
('Tom', 'mgr', 100000)
('Kim', 'adm', 30000)
('pat', 'dev', 90000)
Tuple unpacking comes in handy in loops here, too, to pick out
column values as we go. Here’s a simple formatted display of two of
the columns’ values:
>>> curs.execute('select * from people')
>>> for (name, job, pay) in curs.fetchall():
...     print(name, ':', pay)
...
Bob : 5000
Sue : 70000
Ann : 60000
Tom : 100000
Kim : 30000
pat : 90000
Because the query result is a sequence, we can use Python’s
powerful sequence and iteration tools to process it. For instance, to
select just the name column values, we can run a more specific SQL
query and get a list of tuples:
>>> curs.execute('select name from people')
>>> names = curs.fetchall()
>>> names
[('Bob',), ('Sue',), ('Ann',), ('Tom',), ('Kim',), ('pat',)]
Or we can use a Python list comprehension to pick out the fields
we want—by using Python code, we have more control over the data’s
content and format:
>>> curs.execute('select * from people')
>>> names = [rec[0] for rec in curs.fetchall()]
>>> names
['Bob', 'Sue', 'Ann', 'Tom', 'Kim', 'pat']
The fetchall call we’ve used
so far fetches the entire query result table all at once, as a single
sequence (an empty sequence comes back, if the result is empty).
That’s convenient, but it may be slow enough to block the caller
temporarily for large result tables or generate substantial network
traffic if the server is running remotely (something that could easily
require a parallel thread in a GUI). To avoid such a bottleneck, we can
also grab just one row, or a bunch of rows, at a time with
fetchone and fetchmany.
The fetchone
call returns the next result row or
a None false value at the end of
the table:
>>> curs.execute('select * from people')
>>> while True:
...     row = curs.fetchone()
...     if not row: break
...     print(row)
...
('Bob', 'dev', 5000)
('Sue', 'mus', 70000)
('Ann', 'mus', 60000)
('Tom', 'mgr', 100000)
('Kim', 'adm', 30000)
('pat', 'dev', 90000)
The fetchmany call returns a
sequence of rows from the result, but not the entire table; you can
specify how many rows to grab each time with a parameter or rely on
the default as given by the cursor’s arraysize
attribute. Each call gets at most
that many more rows from the result or an empty sequence at the end of
the table:
>>> curs.execute('select * from people')
>>> while True:
...     rows = curs.fetchmany()            # size=N optional argument
...     if not rows: break
...     for row in rows:
...         print(row)
...
('Bob', 'dev', 5000)
('Sue', 'mus', 70000)
('Ann', 'mus', 60000)
('Tom', 'mgr', 100000)
('Kim', 'adm', 30000)
('pat', 'dev', 90000)
For this module at least, the result table is exhausted after a
fetchone or fetchmany
returns a False
value. The DB API says that fetchall
returns “all (remaining) rows,” so
you generally need to call execute
again to regenerate results before fetching
new data:
>>> curs.fetchone()
>>> curs.fetchmany()
[]
>>> curs.fetchall()
[]
Naturally, we can do more than fetch an entire table; the full
power of the SQL language is at your disposal in Python:
>>> curs.execute('select name, job from people where pay > 60000')
>>> curs.fetchall()
[('Sue', 'mus'), ('Tom', 'mgr'), ('pat', 'dev')]
The last query fetches name and job fields for people who earn
more than $60,000. The next is similar, but passes in the selection
value as a parameter and orders the
result table:
>>> query = 'select name, job from people where pay >= ? order by name'
>>> curs.execute(query, [60000])
>>> for row in curs.fetchall(): print(row)
...
('Ann', 'mus')
('Sue', 'mus')
('Tom', 'mgr')
('pat', 'dev')
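As one last flavor sample, the cursor's description attribute mentioned earlier gives column names after a query, which lets us label row values by field name; a sketch, rebuilding a small version of our table in memory so it stands alone:

```python
import sqlite3

conn = sqlite3.connect(':memory:')          # small copy of the table, self-contained
curs = conn.cursor()
curs.execute('create table people (name char(30), job char(10), pay int)')
curs.executemany('insert into people values (?, ?, ?)',
                 [('Sue', 'mus', 70000), ('Tom', 'mgr', 100000)])

curs.execute('select name, job from people where pay >= ? order by name', [60000])
colnames = [desc[0] for desc in curs.description]   # column name is item 0
records = [dict(zip(colnames, row)) for row in curs.fetchall()]
for rec in records:
    print(rec)
conn.close()
```

Mapping rows to dictionaries this way is a common first step toward the record wrappers and ORMs discussed later in this chapter.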