My use case requires querying and correlating data across multiple databases, so I chose pandas: read each source with read_sql into a DataFrame and generate the target data directly from there. The problem I am running into is that read_sql is very slow. For example, it takes four and a half minutes to read a table of about 370,000 rows (22 fields) from an Oracle database into a DataFrame. The code is shown below:
import pandas as pd
import sqlalchemy as sql

ora_engine = sql.create_engine('oracle://test01:test01@test01db')
ora_df1 = pd.read_sql('select * from target_table1', ora_engine)
This took 4 minutes and 32 seconds.
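For context, the cross-database correlation that motivates this workflow looks roughly like the sketch below. The second engine URL, table name, and join key are invented placeholders for illustration, not my real schema:

import pandas as pd
import sqlalchemy as sql

# Oracle source from the example above, plus a hypothetical second source.
ora_engine = sql.create_engine('oracle://test01:test01@test01db')
my_engine = sql.create_engine('mysql://test02:test02@test02db/test02')

ora_df = pd.read_sql('select * from target_table1', ora_engine)
my_df = pd.read_sql('select * from target_table2', my_engine)

# Correlate the two sources in pandas once both are in memory;
# 'common_key' is a placeholder join column.
merged = ora_df.merge(my_df, on='common_key', how='inner')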
Even a simple, crude alternative that goes through the raw connection is much faster than read_sql. The code is shown below:
import pandas as pd
import sqlalchemy as sql

ora_engine = sql.create_engine('oracle://test01:test01@test01db')
conn = ora_engine.raw_connection()
cursor = conn.cursor()
queryset = cursor.execute('select * from target_table1')
# Column names come from the cursor's result-set metadata.
columns = [col[0] for col in queryset.description]
df_data = queryset.fetchall()
# Build the DataFrame from the fetched rows in a single pass.
ora_df1 = pd.DataFrame(df_data, columns=columns)
This took 1 minute and 31 seconds.
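One knob that may matter on this raw-cursor path is the driver's fetch batch size: DB-API cursors (including cx_Oracle's) expose an arraysize attribute, and raising it from its small default reduces the number of network round trips on a large result set. A minimal sketch, assuming the cx_Oracle driver underneath SQLAlchemy; the value 5000 is an arbitrary guess, not a tuned number:

import pandas as pd
import sqlalchemy as sql

ora_engine = sql.create_engine('oracle://test01:test01@test01db')
conn = ora_engine.raw_connection()
cursor = conn.cursor()
# Fetch rows from the server in batches of 5000 instead of the driver default.
cursor.arraysize = 5000
queryset = cursor.execute('select * from target_table1')
columns = [col[0] for col in queryset.description]
ora_df1 = pd.DataFrame(queryset.fetchall(), columns=columns)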
Does anyone here know of a way to optimize read_sql in pandas and improve its speed? Thank you very much~
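P.S. One variation I have seen suggested, though I have not benchmarked it myself, is streaming the result through read_sql's chunksize parameter and concatenating the pieces at the end; the chunk size of 10000 below is arbitrary:

import pandas as pd
import sqlalchemy as sql

ora_engine = sql.create_engine('oracle://test01:test01@test01db')

# With chunksize set, read_sql returns an iterator of DataFrames
# instead of one large frame; concat rebuilds the full result.
chunks = pd.read_sql('select * from target_table1', ora_engine, chunksize=10000)
ora_df1 = pd.concat(chunks, ignore_index=True)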