


Too complete! A collection of tutorials on using Python to operate MySQL!
1. Introduction to Python database operations
Python's standard database interface is the Python DB-API, which provides developers with a uniform database application programming interface. The DB-API supports many databases, so you can choose the one that suits your project:
- GadFly
- mSQL
- MySQL
- PostgreSQL
- Microsoft SQL Server 2000
- Informix
- Interbase
- Oracle
- Sybase…
You can check the Python database interfaces and APIs page for a detailed list of supported databases.
You need to download a different DB-API module for each database you use. For example, if you need to access both an Oracle database and a MySQL database, you need to install both the Oracle and the MySQL database modules.
DB-API is a specification. It defines a set of required objects and database access methods, so that a variety of underlying database systems and database interface programs can be accessed through a consistent interface.
Interfaces following Python's DB-API exist for most databases. Once you connect to a database through such an interface, you can operate every database in the same way.
Python DB-API usage process (see the minimal sketch after this list):
- Introduce the API module.
- Get the connection to the database.
- Execute SQL statements and stored procedures.
- Close the database connection.
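As a rough illustration of these four steps, here is a minimal sketch using the standard-library sqlite3 module (chosen only because it needs no server; the demo table is purely hypothetical). Any DB-API-compliant driver, including PyMySQL below, follows the same flow.

import sqlite3  # 1. Import the API module (any DB-API module works the same way)

# 2. Get a connection to the database
conn = sqlite3.connect(":memory:")

# 3. Execute SQL statements through a cursor
cur = conn.cursor()
cur.execute("CREATE TABLE demo (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO demo (name) VALUES (?)", ("alice",))
conn.commit()
cur.execute("SELECT * FROM demo")
print(cur.fetchall())

# 4. Close the database connection
cur.close()
conn.close()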
2. Python modules for operating MySQL
Python mainly operates MySQL in two ways:
DB modules (native SQL)
- PyMySQL (supports python2.x/3.x)
- MySQLdb (currently only supports python2.x)
ORM Framework
- SQLAlchemy
2.1 PyMySQL module
This article mainly introduces the PyMySQL module; MySQLdb is used in a similar way.
2.1.1 Install PyMySQL
PyMySQL is a MySQL driver written in Python, which allows us to operate the MySQL database using the Python language.
pip install PyMySQL
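If you want a quick sanity check that the driver is installed, you can import it and print its version tuple (a minimal check; the exact value depends on the release you installed):

import pymysql

# Raises ImportError if the driver is missing; otherwise prints something like (1, 1, 0, ...)
print(pymysql.VERSION)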
2.2 Basic usage
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import pymysql

# Create a connection
conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff', passwd='zff123', db='zff', charset='utf8mb4')
# Create a cursor (queries return rows as tuples)
# cursor = conn.cursor()
# Create a cursor (queries return rows as dicts)
cursor = conn.cursor(pymysql.cursors.DictCursor)

# 1. Execute SQL that inserts several rows at once; returns the number of affected rows
effect_row1 = cursor.executemany("insert into USER (NAME) values(%s)", [("jack",), ("boom",), ("lucy",)])
# 2. Execute a query; returns the number of affected rows
effect_row2 = cursor.execute("select * from USER")
# 3. Fetch all rows of the query; they come back as dicts because of DictCursor
result = cursor.fetchall()

# Inserts/deletes/updates must be committed for the changes to be saved
conn.commit()
# Close the cursor
cursor.close()
# Close the connection
conn.close()

print(result)
"""
[{'id': 6, 'name': 'boom'}, {'id': 5, 'name': 'jack'}, {'id': 7, 'name': 'lucy'}, {'id': 4, 'name': 'tome'}, {'id': 3, 'name': 'zff'}, {'id': 1, 'name': 'zhaofengfeng'}, {'id': 2, 'name': 'zhaofengfeng02'}]
"""
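If you prefer not to manage close() calls by hand, the cursor can be used as a context manager (pymysql cursors support this), combined with try/finally for the connection. A minimal sketch reusing the same placeholder credentials as above:

import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff',
                       passwd='zff123', db='zff', charset='utf8mb4')
try:
    # The cursor is closed automatically when the with-block exits
    with conn.cursor(pymysql.cursors.DictCursor) as cursor:
        cursor.execute("select * from USER")
        print(cursor.fetchall())
finally:
    conn.close()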
2.3 Get the auto-increment ID of newly created data
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import pymysql

# Create a connection
conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff', passwd='zff123', db='zff', charset='utf8mb4')
# Create a cursor (queries return rows as tuples)
cursor = conn.cursor()

# Insert a new row; its auto-increment ID can then be read from the cursor
effect_row = cursor.executemany("insert into USER (NAME) values(%s)", [("eric",)])
# Inserts/deletes/updates must be committed
conn.commit()
# Read lastrowid before closing the cursor
new_id = cursor.lastrowid

# Close the cursor
cursor.close()
# Close the connection
conn.close()

print(new_id)
"""
8
"""
2.4 Query operation
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import pymysql

# Create a connection
conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff', passwd='zff123', db='zff', charset='utf8mb4')
# Create a cursor
cursor = conn.cursor()

cursor.execute("select * from USER")
# Fetch the first row
row_1 = cursor.fetchone()
# Fetch the next n rows
row_2 = cursor.fetchmany(3)
# Fetch all remaining rows
row_3 = cursor.fetchall()

# Close the cursor
cursor.close()
# Close the connection
conn.close()

print(row_1)
print(row_2)
print(row_3)
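Note that fetchone, fetchmany and fetchall all consume the same result set, so each call continues where the previous one stopped, and fetchone returns None once the rows are exhausted. A tiny illustration against the same placeholder USER table:

import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff',
                       passwd='zff123', db='zff', charset='utf8mb4')
cursor = conn.cursor()
cursor.execute("select * from USER")

print(cursor.fetchone())  # first row
print(cursor.fetchone())  # second row -- the cursor remembers its position
cursor.fetchall()         # consume whatever is left
print(cursor.fetchone())  # None: the result set is exhausted

cursor.close()
conn.close()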
⚠️ Rows are fetched in order, but you can use cursor.scroll(value, mode) to move the cursor position, for example (see the sketch after this list):
- cursor.scroll(1, mode='relative')  # move relative to the current position
- cursor.scroll(2, mode='absolute')  # move to an absolute position
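A short sketch of scrolling within a result set (assuming the placeholder USER table from the earlier examples has at least three rows; scrolling outside the result set raises an IndexError):

import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff',
                       passwd='zff123', db='zff', charset='utf8mb4')
cursor = conn.cursor()
cursor.execute("select * from USER")

row_1 = cursor.fetchone()           # row at position 0
cursor.scroll(-1, mode='relative')  # step back one row from the current position
row_1_again = cursor.fetchone()     # the same row again

cursor.scroll(2, mode='absolute')   # jump to absolute position 2
row_3 = cursor.fetchone()           # the third row

cursor.close()
conn.close()
print(row_1, row_1_again, row_3)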
2.5 Prevent SQL injection
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import pymysql

# Create a connection
conn = pymysql.connect(host="127.0.0.1", port=3306, user='zff', passwd='zff123', db='zff', charset='utf8mb4')
# Create a cursor
cursor = conn.cursor()

# Vulnerable to SQL injection (do not build SQL by formatting/concatenating strings)
sql = "insert into USER (NAME) values('%s')" % ('zhangsan',)
effect_row = cursor.execute(sql)

# Correct way 1:
# execute() accepts a tuple/list of parameters; the number of elements must match the placeholders (here, one)
sql = "insert into USER (NAME) values(%s)"
effect_row1 = cursor.execute(sql, ['wang6'])
effect_row2 = cursor.execute(sql, ('wang7',))

# Correct way 2: named placeholders with a dict
sql = "insert into USER (NAME) values(%(name)s)"
effect_row1 = cursor.execute(sql, {'name': 'wudalang'})

# Insert multiple rows at once
effect_row2 = cursor.executemany("insert into USER (NAME) values(%s)", [('ermazi',), ('dianxiaoer',)])

# Commit
conn.commit()
# Close the cursor
cursor.close()
# Close the connection
conn.close()
This way, SQL operations are much safer. For more detail, refer to the PyMySQL documentation. Note, however, that placeholder syntax is not the same across database modules: PyMySQL uses the C-style format marker %s, while Python's built-in sqlite3 module uses a question mark (?). So read the documentation carefully when working with another database. See Welcome to PyMySQL's documentation.
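To make the difference concrete, here is a small self-contained demonstration using the standard-library sqlite3 module (so it runs without a MySQL server; note it uses ? placeholders where PyMySQL uses %s). The users table and the malicious input are purely illustrative:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES ('alice')")

evil = "x' OR '1'='1"

# String formatting: the input becomes part of the SQL text, so the
# OR '1'='1' clause matches every row -- this is SQL injection.
cur.execute("SELECT * FROM users WHERE name = '%s'" % evil)
print("formatted:", cur.fetchall())      # returns alice's row

# Parameterized query: the driver treats the whole input as a literal value,
# so no row matches.
cur.execute("SELECT * FROM users WHERE name = ?", (evil,))
print("parameterized:", cur.fetchall())  # returns []

conn.close()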
3. Database connection pool
There is a problem with the approach above: it works fine in a single-threaded program, but it has to frequently create and release connections for every database operation. What problems will our program/script run into once multiple threads are involved? This is where a database connection pool comes in!
3.1 DBUtils Module
DBUtils is a module in Python used to implement database connection pooling.
This connection pool has two connection modes:
- Create a dedicated connection for each thread. Even if the thread calls the close method, the connection is not closed; it is simply put back into the pool for that same thread to use again, and it is only closed automatically when the thread terminates.
- Create a pool of connections that is shared by all threads (recommended)
3.2 Mode 1
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
from DBUtils.PersistentDB import PersistentDB
import pymysql

POOL = PersistentDB(
    creator=pymysql,   # module used to create connections
    maxusage=None,     # maximum number of times a connection may be reused; None means unlimited
    setsession=[],     # list of commands executed before each session, e.g. ["set datestyle to ...", "set time zone ..."]
    ping=0,            # ping the MySQL server to check whether it is still alive:
                       # 0 = None = never, 1 = default = whenever it is requested,
                       # 2 = when a cursor is created, 4 = when a query is executed, 7 = always
    closeable=False,   # if False, conn.close() is effectively ignored and the connection is kept for the
                       # thread's next use; it is only really closed when the thread terminates.
                       # If True, conn.close() closes the connection for real, and calling pool.connection()
                       # again raises an error because the connection is gone
                       # (pool.steady_connection() can be used to get a fresh one)
    threadlocal=None,  # thread-local object used to hold each thread's own connection
    host='127.0.0.1',
    port=3306,
    user='zff',
    password='zff123',
    database='zff',
    charset='utf8',
)

def func():
    conn = POOL.connection(shareable=False)
    cursor = conn.cursor()
    cursor.execute('select * from USER')
    result = cursor.fetchall()
    cursor.close()
    conn.close()
    return result

result = func()
print(result)
3.3 Mode 2
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import time
import pymysql
import threading
from DBUtils.PooledDB import PooledDB, SharedDBConnection

POOL = PooledDB(
    creator=pymysql,     # module used to create connections
    maxconnections=6,    # maximum number of connections allowed in the pool; 0 or None means no limit
    mincached=2,         # number of idle connections created at startup; 0 means none
    maxcached=5,         # maximum number of idle connections kept in the pool; 0 or None means no limit
    maxshared=3,         # maximum number of shared connections; 0 or None means all are shared.
                         # Note: this has no effect with pymysql/MySQLdb, because their threadsafety is 1,
                         # so connections cannot be shared and every connection handed out is dedicated.
    blocking=True,       # whether to block and wait when the pool has no free connection;
                         # True = wait, False = raise an error instead of waiting
    maxusage=None,       # maximum number of times a connection may be reused; None means unlimited
    setsession=[],       # list of commands executed before each session, e.g. ["set datestyle to ...", "set time zone ..."]
    ping=0,              # ping the MySQL server to check whether it is still alive:
                         # 0 = None = never, 1 = default = whenever it is requested,
                         # 2 = when a cursor is created, 4 = when a query is executed, 7 = always
    host='127.0.0.1',
    port=3306,
    user='zff',
    password='zff123',
    database='zff',
    charset='utf8'
)

def func():
    # If the number of connections currently in use has reached the maximum,
    # either wait or raise TooManyConnections (depending on `blocking`).
    # Otherwise, prefer one of the SteadyDBConnection objects created at startup,
    # wrap it in a PooledDedicatedDBConnection and return it.
    # If no idle connection is available, create a new SteadyDBConnection,
    # wrap it in a PooledDedicatedDBConnection and return it.
    # Once the connection is closed, it goes back into the pool for later threads to use.
    conn = POOL.connection()

    # print('connection taken:', conn._con)
    # print('idle connections in the pool:', POOL._idle_cache, '\r\n')
    cursor = conn.cursor()
    cursor.execute('select * from USER')
    result = cursor.fetchall()
    conn.close()
    return result

result = func()
print(result)
⚠️ Because the threadsafety of pymysql, MySQLdb, etc. is 1, connections handed out by the pool in this mode are dedicated rather than shared between threads, so it is thread-safe. Without a connection pool, using pymysql to connect to the database is fine for a single-threaded application, but a multi-threaded application needs locking; once locked, connections inevitably queue up, and performance drops when there are many requests.
3.4 Locking
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import pymysql
import threading
from threading import RLock

LOCK = RLock()
CONN = pymysql.connect(host='127.0.0.1', port=3306, user='zff', password='zff123', database='zff', charset='utf8')

def task(arg):
    # Serialize access to the shared connection with a lock
    with LOCK:
        cursor = CONN.cursor()
        cursor.execute('select * from USER ')
        result = cursor.fetchall()
        cursor.close()
        print(result)

for i in range(10):
    t = threading.Thread(target=task, args=(i,))
    t.start()
3.5 No lock (errors will occur)
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = "shuke"
# Date: 2018/5/13
import pymysql
import threading

CONN = pymysql.connect(host='127.0.0.1', port=3306, user='zff', password='zff123', database='zff', charset='utf8')

def task(arg):
    # All threads share one connection without any locking -- errors will
    # eventually occur because the connection is not thread-safe
    cursor = CONN.cursor()
    cursor.execute('select * from USER ')
    # cursor.execute('select sleep(10)')
    result = cursor.fetchall()
    cursor.close()
    print(result)

for i in range(10):
    t = threading.Thread(target=task, args=(i,))
    t.start()
At this time, you can check the connection status in the database: show status like 'Threads%';
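For reference, the same status check can also be run from Python; a minimal sketch reusing the placeholder credentials from the earlier examples:

import pymysql

conn = pymysql.connect(host='127.0.0.1', port=3306, user='zff',
                       password='zff123', database='zff', charset='utf8')
cursor = conn.cursor()
# Shows counters such as Threads_connected and Threads_running
cursor.execute("show status like 'Threads%'")
for name, value in cursor.fetchall():
    print(name, value)
cursor.close()
conn.close()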
4. Using a database connection pool together with pymysql
# cat sql_helper.py
import pymysql
import threading
from DBUtils.PooledDB import PooledDB, SharedDBConnection

POOL = PooledDB(
    creator=pymysql,     # module used to create connections
    maxconnections=20,   # maximum number of connections allowed in the pool; 0 or None means no limit
    mincached=2,         # number of idle connections created at startup; 0 means none
    maxcached=5,         # maximum number of idle connections kept in the pool; 0 or None means no limit
    # maxshared=3,       # maximum number of shared connections; has no effect with pymysql/MySQLdb,
                         # because their threadsafety is 1, so connections are never shared
    blocking=True,       # block and wait when the pool has no free connection (False raises an error instead)
    maxusage=None,       # maximum number of times a connection may be reused; None means unlimited
    setsession=[],       # list of commands executed before each session, e.g. ["set datestyle to ...", "set time zone ..."]
    ping=0,              # ping the MySQL server: 0 = never, 1 = whenever requested,
                         # 2 = when a cursor is created, 4 = when a query is executed, 7 = always
    host='192.168.11.38',
    port=3306,
    user='root',
    passwd='apNXgF6RDitFtDQx',
    db='m2day03db',
    charset='utf8'
)

def connect():
    # Get a connection from the pool
    # conn = pymysql.connect(host='192.168.11.38', port=3306, user='root', passwd='apNXgF6RDitFtDQx', db='m2day03db')
    conn = POOL.connection()
    # Create a cursor that returns rows as dicts
    cursor = conn.cursor(pymysql.cursors.DictCursor)
    return conn, cursor

def close(conn, cursor):
    # Close the cursor
    cursor.close()
    # Return the connection to the pool
    conn.close()

def fetch_one(sql, args):
    conn, cursor = connect()
    # Execute the SQL; returns the number of affected rows
    effect_row = cursor.execute(sql, args)
    result = cursor.fetchone()
    close(conn, cursor)
    return result

def fetch_all(sql, args):
    conn, cursor = connect()
    # Execute the SQL; returns the number of affected rows
    cursor.execute(sql, args)
    result = cursor.fetchall()
    close(conn, cursor)
    return result

def insert(sql, args):
    """
    Insert data
    :param sql: SQL containing placeholders
    :return:
    """
    conn, cursor = connect()
    # Execute the SQL; returns the number of affected rows
    effect_row = cursor.execute(sql, args)
    conn.commit()
    close(conn, cursor)

def delete(sql, args):
    """
    Delete data
    :param sql: SQL containing placeholders
    :return:
    """
    conn, cursor = connect()
    # Execute the SQL; returns the number of affected rows
    effect_row = cursor.execute(sql, args)
    conn.commit()
    close(conn, cursor)
    return effect_row

def update(sql, args):
    conn, cursor = connect()
    # Execute the SQL; returns the number of affected rows
    effect_row = cursor.execute(sql, args)
    conn.commit()
    close(conn, cursor)
    return effect_row
PS: For convenience, you can also encapsulate these helpers into a class with static methods, for example:
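As a rough illustration of that suggestion, the helpers above could be wrapped in a class with static methods. This is only a sketch: the class name SQLHelper is made up here, and it assumes the POOL object and pymysql import defined in sql_helper.py above.

class SQLHelper(object):
    """Thin static wrapper around the connection pool defined in sql_helper.py."""

    @staticmethod
    def open():
        # Assumes POOL is the PooledDB instance created above
        conn = POOL.connection()
        cursor = conn.cursor(pymysql.cursors.DictCursor)
        return conn, cursor

    @staticmethod
    def close(conn, cursor):
        cursor.close()
        conn.close()

    @staticmethod
    def fetch_all(sql, args=None):
        conn, cursor = SQLHelper.open()
        cursor.execute(sql, args)
        result = cursor.fetchall()
        SQLHelper.close(conn, cursor)
        return result

    @staticmethod
    def execute(sql, args=None):
        # For insert/update/delete statements
        conn, cursor = SQLHelper.open()
        effect_row = cursor.execute(sql, args)
        conn.commit()
        SQLHelper.close(conn, cursor)
        return effect_row

# Usage (table and column names are only examples):
# rows = SQLHelper.fetch_all("select * from USER")
# SQLHelper.execute("insert into USER (NAME) values(%s)", ("tom",))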