
1. Generate data table
1. First import the pandas library. NumPy is generally used alongside it, so import both:
import numpy as np
import pandas as pd
2. Import a CSV or xlsx file:
df = pd.read_csv('name.csv', header=1)
df = pd.read_excel('name.xlsx')
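read_csv and read_excel return DataFrames directly. A few common optional parameters (the file names and sheet name are placeholders):
df = pd.read_csv('name.csv', encoding='utf-8')        # specify the file encoding
df = pd.read_excel('name.xlsx', sheet_name='Sheet1')  # pick a worksheet by name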
3. Use pandas to create a data table:
df = pd.DataFrame({"id":[1001,1002,1003,1004,1005,1006],
"date":pd.date_range('20130102', periods=6),
"city":['Beijing ', 'SH', ' guangzhou ', 'Shenzhen', 'shanghai', 'BEIJING '],
"age":[23,44,54,32,34,32],
"category":['100-A','100-B','110-A','110-C','210-A','130-F'],
"price":[1200,np.nan,2133,5433,np.nan,4432]},
columns=['id','date','city','category','age','price'])
2. Data table information view
1. View the dimensions:
df.shape
2. Basic information of the data table (dimension, column name, data format, occupied space, etc.):
df.info()
3. The format of each column of data:
df.dtypes
4. The format of a specific column:
df['city'].dtype
5. Check for null values:
df.isnull()
6. View the null values of a specific column:
df['price'].isnull()
7. View the unique values of a column:
df['city'].unique()
8. View the value of the data table:
df.values
9. View the column name:
df.columns
10. View the first and last rows of data:
df.head()  # first 5 rows by default
df.tail()  # last 5 rows by default
3. Data table cleaning
1. Fill the empty values with the number 0:
df.fillna(value=0)
2. Fill NA values in the price column with the column mean:
df['price'].fillna(df['price'].mean())
3. Strip the whitespace from the city field:
df['city'] = df['city'].str.strip()
4. Case conversion:
df['city']=df['city'].str.lower()
5. Change the data format of a column:
df['price'].astype('int')  # raises if NaN values remain, so fill them first
6. Change a column name:
df.rename(columns={'category': 'category-size'})
7. Drop duplicate values that appear later, keeping the first:
df['city'].drop_duplicates()
8. Drop duplicate values that appear first, keeping the last:
df['city'].drop_duplicates(keep='last')
9. Data replacement:
df['city'].replace('sh', 'shanghai')
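Note that fillna, astype, rename, drop_duplicates, and replace return new objects rather than modifying the table in place, so assign the result back. A minimal sketch chaining the cleaning steps above:
df['city'] = df['city'].str.strip().str.lower().replace('sh', 'shanghai')
df['price'] = df['price'].fillna(df['price'].mean()).astype('int')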
4. Data preprocessing
df1=pd.DataFrame({"id":[1001,1002,1003,1004,1005,1006,1007,1008],
"gender":['male','female','male','female','male','female','male','female'],
"pay":['Y','N','Y','Y','N','Y','N','Y',],
"m-point":[10,12,20,40,40,40,30,20]})1. Merge data tables
df_inner = pd.merge(df, df1, how='inner')  # match and merge: intersection
df_left = pd.merge(df, df1, how='left')    # keep all rows of df
df_right = pd.merge(df, df1, how='right')  # keep all rows of df1
df_outer = pd.merge(df, df1, how='outer')  # union
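Without an on argument, merge joins on all columns the two tables share (here only id); naming the key explicitly is clearer:
df_inner = pd.merge(df, df1, how='inner', on='id')  # join df and df1 on the shared id column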
2. Set a column as the index:
df_inner.set_index('id')
3. Sort by the value of a specific column:
df_inner.sort_values(by=['age'])
4. Sort by index column:
df_inner.sort_index()
5. If the value of the price column is > 3000, the group column displays high, otherwise low:
df_inner['group'] = np.where(df_inner['price'] > 3000,'high','low')
6. Mark data that meets multiple conditions:
df_inner.loc[(df_inner['city'] == 'beijing') & (df_inner['price'] >= 4000), 'sign']=1
7. Split the values of the category field on '-' into separate columns and create a data table; the index is df_inner's index and the columns are named category and size:
split = pd.DataFrame((x.split('-') for x in df_inner['category']), index=df_inner.index, columns=['category','size'])
8. Merge the split data table back into the original df_inner table:
df_inner=pd.merge(df_inner,split,right_index=True, left_index=True)
5. Data extraction
The three main functions used are loc, iloc, and ix: loc selects by label, iloc selects by position, and ix (removed in pandas 1.0) mixed labels and positions.
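A minimal contrast on the df_inner table built above (with its default integer index, positions and labels coincide on the row axis, but the column axis shows the difference):
df_inner.iloc[0, 0]    # first row, first column, by position
df_inner.loc[0, 'id']  # row label 0, column label 'id'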
1. Extract the value of a single row by index
df_inner.loc[3]
2. Extract a range of rows by position:
df_inner.iloc[0:5]
3. Reset the index
df_inner.reset_index()
4. Set date as index
df_inner=df_inner.set_index('date')
5. Extract all rows up to January 4:
df_inner[:'2013-01-04']
6. Use iloc to extract data by a positional range:
df_inner.iloc[:3,:2]  # the numbers around the colon are positions starting from 0, not index labels: first three rows, first two columns
7. Use iloc to extract data at individual positions:
df_inner.iloc[[0,2,5],[4,5]]  # rows 0, 2, 5 and columns 4, 5
8. Use ix to extract data by a mix of index labels and positions:
df_inner.ix[:'2013-01-03',:4]  # rows up to 2013-01-03, first four columns
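Since ix was removed in pandas 1.0, the same mixed selection can be written with loc by turning the positional column slice into labels (a sketch assuming the date index set above):
df_inner.loc[:'2013-01-03', df_inner.columns[:4]]  # rows up to 2013-01-03, first four columns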
9. Determine whether the values of the city column are beijing:
df_inner['city'].isin(['beijing'])
10. Extract the rows whose city value is beijing or shanghai:
df_inner.loc[df_inner['city'].isin(['beijing','shanghai'])]
11. Extract the first three characters of the category field and generate a data table:
pd.DataFrame(df_inner['category'].str[:3])
6. Data filtering
Filter the data with AND, OR, and NOT conditions combined with greater-than, less-than, and equality comparisons, then count and sum the results.
1. Use "AND" to filter
df_inner.loc[(df_inner['age'] > 25) & (df_inner['city'] == 'beijing'), ['id','city','age','category','gender']]
2. Use "OR" to filter
df_inner.loc[(df_inner['age'] > 25) | (df_inner['city'] == 'beijing'), ['id','city','age','category','gender']].sort_values('age')
3. Use "NOT" condition to filter
df_inner.loc[(df_inner['city'] != 'beijing'), ['id','city','age','category','gender']].sort_values('id')
4. Count the filtered data by city column
df_inner.loc[(df_inner['city'] != 'beijing'), ['id','city','age','category','gender']].sort_values('id').city.count()
5. Use query function to filter
df_inner.query('city == ["beijing", "shanghai"]')
6. Sum the price column of the filtered results:
df_inner.query('city == ["beijing", "shanghai"]').price.sum()
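query can also reference Python variables with the @ prefix, which avoids building the list into the query string (a minimal sketch):
cities = ['beijing', 'shanghai']
df_inner.query('city == @cities').price.sum()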
7. Data summary
The main functions are groupby and pivot_table; a pivot_table sketch follows the groupby steps below.
1. Count and summarize all columns
df_inner.groupby('city').count()
2. Count the id field by city
df_inner.groupby('city')['id'].count()
3. Summarize by two fields:
df_inner.groupby(['city','size'])['id'].count()
4. Summarize by the city field and calculate the count, total, and mean of price:
df_inner.groupby('city')['price'].agg(['count', 'sum', 'mean'])
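The section intro mentions pivot_table as well; a minimal sketch producing the same city/price summary with it (the parameter choices are illustrative):
pd.pivot_table(df_inner, index='city', values='price', aggfunc=['count', 'sum', 'mean'])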
8. Data statistics
Data sampling and calculation of the standard deviation, covariance, and correlation coefficient.
1. Simple data sampling
df_inner.sample(n=3)
2. Manually set the sampling weight
weights = [0, 0, 0, 0, 0.5, 0.5]
df_inner.sample(n=2, weights=weights)
3. Sample without replacement:
df_inner.sample(n=6, replace=False)
4. Sample with replacement:
df_inner.sample(n=6, replace=True)
5. Descriptive statistics of data table
df_inner.describe().round(2).T  # round sets the number of displayed decimal places; T transposes the table
6. Calculate the standard deviation of a column
df_inner['price'].std()
7. Calculate the covariance between two fields
df_inner['price'].cov(df_inner['m-point'])
8. Calculate the covariance between all fields in the data table
df_inner.cov()  # recent pandas versions may require numeric_only=True when non-numeric columns are present
9. Correlation analysis of two fields
df_inner['price'].corr(df_inner['m-point'])  # the correlation coefficient lies between -1 and 1: near 1 is positively correlated, near -1 negatively correlated, 0 uncorrelated
10. Correlation analysis of data table
df_inner.corr()  # recent pandas versions may require numeric_only=True when non-numeric columns are present
9. Data output
The analyzed data can be exported to xlsx or csv format.
1. Write to Excel:
df_inner.to_excel('excel_to_python.xlsx', sheet_name='bluewhale_cc')
2. Write to CSV:
df_inner.to_csv('excel_to_python.csv')
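Both writers accept an index argument; by default the row index is written out as the first column. A sketch suppressing it (to_excel also needs an Excel engine such as openpyxl installed):
df_inner.to_csv('excel_to_python.csv', index=False)
df_inner.to_excel('excel_to_python.xlsx', sheet_name='bluewhale_cc', index=False)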