
How to Crawl JD.com Product Info and Comments with Python and Store Them in MySQL


Creating the MySQL tables

Problem: with SQLAlchemy, a non-primary-key column cannot be set to auto-increment. I only want this column to serve as an index, but autoincrement=True has no effect on it. How can I make it auto-increment? The workaround used below is to declare id as part of a composite primary key, where autoincrement=True does take effect.

from sqlalchemy import String,Integer,Text,Column
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import scoped_session
from sqlalchemy.ext.declarative import declarative_base
 
engine=create_engine(
    "mysql+pymysql://root:root@127.0.0.1:3306/jdcrawl?charset=utf8",
    pool_size=200,
    max_overflow=300,
    echo=False
)
 
BASE=declarative_base() # instantiate the declarative base class
 
class Goods(BASE):
    __tablename__='goods'
    # workaround: make id part of a composite primary key so autoincrement takes effect
    id=Column(Integer(),primary_key=True,autoincrement=True)
    sku_id = Column(String(200), primary_key=True, autoincrement=False)
    name=Column(String(200))
    price=Column(String(200))
    comments_num=Column(Integer)
    shop=Column(String(200))
    link=Column(String(200))
 
class Comments(BASE):
    __tablename__='comments'
    id=Column(Integer(),primary_key=True,autoincrement=True,nullable=False)
    sku_id=Column(String(200),primary_key=True,autoincrement=False)
    comments=Column(Text())
 
BASE.metadata.create_all(engine)
Session=sessionmaker(engine)
sess_db=scoped_session(Session)
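
The composite-key trick above works, but it is not the only option. A minimal alternative sketch (my assumption, not from the original; GoodsAlt is a hypothetical name, and it reuses BASE and the imports from the listing above): keep id as the sole auto-increment primary key and index sku_id separately, which keeps sku_id fast to look up without making it part of the key:

# Sketch: single-column auto-increment PK, sku_id as an ordinary indexed column
class GoodsAlt(BASE):
    __tablename__='goods_alt'
    id=Column(Integer(),primary_key=True,autoincrement=True)
    sku_id=Column(String(200),index=True,unique=True)   # indexed, but not part of the key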

Version 1:

Problem: after crawling a few pages of comments, the crawler starts receiving blank pages; adding a Referer header doesn't help.

Attempted fix: switch the comment-fetching thread pool to a single thread and add a 1-second delay after each page of comments (a reusable throttling helper along the same lines is sketched after the Version 1 listing below).

# Don't crawl too fast!!! Otherwise the comment pages come back empty
 
from bs4 import BeautifulSoup
import requests
from urllib import parse
import json,re
import threadpool
import time
from jd_mysqldb import Goods,Comments,sess_db
 
headers={
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'Cookie': '__jdv=76161171|baidu|-|organic|%25E4%25BA%25AC%25E4%25B8%259C|1613711947911; __jdu=16137119479101182770449; areaId=7; ipLoc-djd=7-458-466-0; PCSYCityID=CN_410000_0_0; shshshfpa=07383463-032f-3f99-9d40-639cb57c6e28-1613711950; shshshfpb=u8S9UvxK66gfIbM1mUNrIOg%3D%3D; user-key=153f6b4d-0704-4e56-82b6-8646f3f0dad4; cn=0; shshshfp=9a88944b34cb0ff3631a0a95907b75eb; __jdc=122270672; 3AB9D23F7A4B3C9B=SEELVNXBPU7OAA3UX5JTKR5LQADM5YFJRKY23Z6HDBU4OT2NWYGX525CKFFVHTRDJ7Q5DJRMRZQIQJOW5GVBY43XVI; jwotest_product=99; __jda=122270672.16137119479101182770449.1613711948.1613738165.1613748918.4; JSESSIONID=C06EC8D2E9384D2628AE22B1A6F9F8FC.s1; shshshsID=ab2ca3143928b1b01f6c5b71a15fcebe_5_1613750374847; __jdb=122270672.5.16137119479101182770449|4.1613748918',
    'Referer': 'https://www.jd.com/'
}
 
num=0   # number of products crawled
comments_num=0   # number of comments crawled
 
# Fetch product info and SKU IDs from a search-results page
def getIndex(url):
    session=requests.Session()
    session.headers=headers
    global num
    res=session.get(url,headers=headers)
    print(res.status_code)
    res.encoding=res.apparent_encoding
    soup=BeautifulSoup(res.text,'lxml')
    items=soup.select('li.gl-item')
    for item in items[:3]:  # crawl 3 products as a test
        title=item.select_one('.p-name a em').text.strip().replace(' ','')
        price=item.select_one('.p-price strong').text.strip().replace('¥','')
        try:
            shop=item.select_one('.p-shopnum a').text.strip()   # shop selector for book listings
        except:
            shop=item.select_one('.p-shop a').text.strip()  # shop selector for other product categories
        link=parse.urljoin('https://',item.select_one('.p-img a').get('href'))
        SkuId=re.search(r'\d+',link).group()
        comments_num=getCommentsNum(SkuId,session)
        print(SkuId,title, price, shop, link, comments_num)
        print("开始存入数据库...")
        try:
            IntoGoods(SkuId,title, price, shop, link, comments_num)
        except Exception as e:
            print(e)
            sess_db.rollback()
        num += 1
        print("正在获取评论...")
        # 获取评论总页数
        url1 = f'https://club.jd.com/comment/productPageComments.action?productId={SkuId}&score=0&sortType=5&page=0&pageSize=10'
        headers['Referer'] = f'https://item.jd.com/{SkuId}.html'
        headers['Connection']='keep-alive'
        res2 = session.get(url1,headers=headers)
        res2.encoding = res2.apparent_encoding
        json_data = json.loads(res2.text)
        max_page = json_data['maxPage']  # testing shows at most 100 pages of comments, 10 per page
        args = []
        for i in range(0, max_page):
            # this URL returns the comments as plain JSON
            url2 = f'https://club.jd.com/comment/productPageComments.action?productId={SkuId}&score=0&sortType=5&page={i}&pageSize=10'
            # this URL returns a JSONP response that has to be unwrapped first
            # url2_2=f'https://club.jd.com/comment/productPageComments.action?callback=jQuery9287224&productId={SkuId}&score=0&sortType=5&page={i}&pageSize=10'
            args.append(([session,SkuId,url2], None))
        pool2 = threadpool.ThreadPool(2)   # 2 threads
        reque2 = threadpool.makeRequests(getComments,args)  # create the tasks
        for r in reque2:
            pool2.putRequest(r) # submit each task to the pool
        pool2.wait()
 
# Get the total comment count for a product
def getCommentsNum(SkuId,sess):
    headers['Referer']=f'https://item.jd.com/{SkuId}.html'
    url=f'https://club.jd.com/comment/productCommentSummaries.action?referenceIds={SkuId}'
    res=sess.get(url,headers=headers)
    try:
        res.encoding=res.apparent_encoding
        json_data=json.loads(res.text)  # parse the JSON response into a dict
        num=json_data['CommentsCount'][0]['CommentCount']
        return num
    except:
        return 'Error'
 
# Fetch one page of comments and store each comment
def getComments(sess,SkuId,url2):
    global comments_num
    print(url2)
    headers['Referer'] = f'https://item.jd.com/{SkuId}.html'
    res2 = sess.get(url2,headers=headers)
    res2.encoding='gbk'
    json_data=res2.text
    '''
    # if url2_2 is used instead, extract the JSON from the JSONP wrapper like this:
    start = res2.text.find('jQuery9287224(') + len('jQuery9287224(')
    end = res2.text.find(');')
    json_data=res2.text[start:end]
    '''
    dict_data = json.loads(json_data)
    try:
        comments=dict_data['comments']
        for item in comments:
            comment=item['content'].replace('\n','')
            # print(comment)
            comments_num+=1
            try:
                IntoComments(SkuId,comment)
            except Exception as e:
                print(e)
                sess_db.rollback()
    except:
        pass
 
# Insert a product row into the database
def IntoGoods(SkuId,title, price, shop, link, comments_num):
    goods_data=Goods(
        sku_id=SkuId,
        name=title,
        price=price,
        comments_num=comments_num,
        shop=shop,
        link=link
    )
    sess_db.add(goods_data)
    sess_db.commit()
 
# Insert a comment row into the database
def IntoComments(SkuId,comment):
    comments_data=Comments(
        sku_id=SkuId,
        comments=comment
    )
    sess_db.add(comments_data)
    sess_db.commit()
 
if __name__ == '__main__':
    start_time=time.time()
    urls=[]
    KEYWORD=parse.quote(input("Enter a search keyword: "))
    for i in range(1,2):    # crawl one search page as a test
        url=f'https://search.jd.com/Search?keyword={KEYWORD}&wq={KEYWORD}&page={i}'
        urls.append(([url,],None))  # the argument format threadpool requires
    pool=threadpool.ThreadPool(2)  # thread pool with 2 threads
    reque=threadpool.makeRequests(getIndex,urls)    # create the tasks
    for r in reque:
        pool.putRequest(r)  # submit each task to the pool
    pool.wait() # wait for all tasks to finish
    print("Crawled {} products and {} comments, elapsed {}s".format(num,comments_num,time.time()-start_time))

Version 2:

As tested, the blank pages indeed no longer appear.

Next optimization: fetch the comments of two or more products at the same time.

# Don't crawl too fast!!! Otherwise the comment pages come back empty
from bs4 import BeautifulSoup
import requests
from urllib import parse
import json,re
import threadpool
import time
from jd_mysqldb import Goods,Comments,sess_db
 
headers={
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'Cookie': '__jdv=76161171|baidu|-|organic|%25E4%25BA%25AC%25E4%25B8%259C|1613711947911; __jdu=16137119479101182770449; areaId=7; ipLoc-djd=7-458-466-0; PCSYCityID=CN_410000_0_0; shshshfpa=07383463-032f-3f99-9d40-639cb57c6e28-1613711950; shshshfpb=u8S9UvxK66gfIbM1mUNrIOg%3D%3D; user-key=153f6b4d-0704-4e56-82b6-8646f3f0dad4; cn=0; shshshfp=9a88944b34cb0ff3631a0a95907b75eb; __jdc=122270672; 3AB9D23F7A4B3C9B=SEELVNXBPU7OAA3UX5JTKR5LQADM5YFJRKY23Z6HDBU4OT2NWYGX525CKFFVHTRDJ7Q5DJRMRZQIQJOW5GVBY43XVI; jwotest_product=99; __jda=122270672.16137119479101182770449.1613711948.1613738165.1613748918.4; JSESSIONID=C06EC8D2E9384D2628AE22B1A6F9F8FC.s1; shshshsID=ab2ca3143928b1b01f6c5b71a15fcebe_5_1613750374847; __jdb=122270672.5.16137119479101182770449|4.1613748918',
    'Referer': 'https://www.jd.com/'
}
 
num=0   # number of products crawled
comments_num=0   # number of comments crawled
 
# Fetch product info and SKU IDs from a search-results page
def getIndex(url):
    session=requests.Session()
    session.headers=headers
    global num
    res=session.get(url,headers=headers)
    print(res.status_code)
    res.encoding=res.apparent_encoding
    soup=BeautifulSoup(res.text,'lxml')
    items=soup.select('li.gl-item')
    for item in items[:2]:  # crawl 2 products as a test
        title=item.select_one('.p-name a em').text.strip().replace(' ','')
        price=item.select_one('.p-price strong').text.strip().replace('¥','')
        try:
            shop=item.select_one('.p-shopnum a').text.strip()   # shop selector for book listings
        except:
            shop=item.select_one('.p-shop a').text.strip()  # shop selector for other product categories
        link=parse.urljoin('https://',item.select_one('.p-img a').get('href'))
        SkuId=re.search(r'\d+',link).group()
        headers['Referer'] = f'https://item.jd.com/{SkuId}.html'
        headers['Connection'] = 'keep-alive'
        comments_num=getCommentsNum(SkuId,session)
        print(SkuId,title, price, shop, link, comments_num)
        print("开始将商品存入数据库...")
        try:
            IntoGoods(SkuId,title, price, shop, link, comments_num)
        except Exception as e:
            print(e)
            sess_db.rollback()
        num += 1
        print("正在获取评论...")
        # 获取评论总页数
        url1 = f'https://club.jd.com/comment/productPageComments.action?productId={SkuId}&score=0&sortType=5&page=0&pageSize=10'
        res2 = session.get(url1,headers=headers)
        res2.encoding = res2.apparent_encoding
        json_data = json.loads(res2.text)
        max_page = json_data['maxPage']  # testing shows at most 100 pages of comments, 10 per page
        print("{} has {} pages of comments".format(SkuId,max_page))
        if max_page==0:
            IntoComments(SkuId,'0')
        else:
            for i in range(0, max_page):
                # this URL returns the comments as plain JSON
                url2 = f'https://club.jd.com/comment/productPageComments.action?productId={SkuId}&score=0&sortType=5&page={i}&pageSize=10'
                # this URL returns a JSONP response that has to be unwrapped first
                # url2_2=f'https://club.jd.com/comment/productPageComments.action?callback=jQuery9287224&productId={SkuId}&score=0&sortType=5&page={i}&pageSize=10'
                print("开始获取第{}页评论:{}".format(i+1,url2) )
                getComments(session,SkuId,url2)
                time.sleep(1)   # throttle: one comment page per second
 
# Get the total comment count for a product
def getCommentsNum(SkuId,sess):
    url=f'https://club.jd.com/comment/productCommentSummaries.action?referenceIds={SkuId}'
    res=sess.get(url)
    try:
        res.encoding=res.apparent_encoding
        json_data=json.loads(res.text)  # parse the JSON response into a dict
        num=json_data['CommentsCount'][0]['CommentCount']
        return num
    except:
        return 'Error'
 
# Fetch one page of comments and store each comment
def getComments(sess,SkuId,url2):
    global comments_num
    res2 = sess.get(url2)
    res2.encoding=res2.apparent_encoding
    json_data=res2.text
    '''
    # if url2_2 is used instead, extract the JSON from the JSONP wrapper like this:
    start = res2.text.find('jQuery9287224(') + len('jQuery9287224(')
    end = res2.text.find(');')
    json_data=res2.text[start:end]
    '''
    dict_data = json.loads(json_data)
    comments=dict_data['comments']
    for item in comments:
        comment=item['content'].replace('\n','')
        # print(comment)
        comments_num+=1
        try:
            IntoComments(SkuId,comment)
        except Exception as e:
            print(e)
            sess_db.rollback()
 
# Insert a product row into the database
def IntoGoods(SkuId,title, price, shop, link, comments_num):
    goods_data=Goods(
        sku_id=SkuId,
        name=title,
        price=price,
        comments_num=comments_num,
        shop=shop,
        link=link
    )
    sess_db.add(goods_data)
    sess_db.commit()
 
# Insert a comment row into the database
def IntoComments(SkuId,comment):
    comments_data=Comments(
        sku_id=SkuId,
        comments=comment
    )
    sess_db.add(comments_data)
    sess_db.commit()
 
if __name__ == '__main__':
    start_time=time.time()
    urls=[]
    KEYWORD=parse.quote(input("Enter a search keyword: "))
    for i in range(1,2):    # crawl one search page as a test
        url=f'https://search.jd.com/Search?keyword={KEYWORD}&wq={KEYWORD}&page={i}'
        urls.append(([url,],None))  # the argument format threadpool requires
    pool=threadpool.ThreadPool(2)  # thread pool with 2 threads
    reque=threadpool.makeRequests(getIndex,urls)    # create the tasks
    for r in reque:
        pool.putRequest(r)  # submit each task to the pool
    pool.wait() # wait for all tasks to finish
    print("Crawled {} products and {} comments, elapsed {}s".format(num,comments_num,time.time()-start_time))

Version 3:

... No good. The blank pages showed up again.

# Don't crawl too fast!!! Otherwise the comment pages come back empty
from bs4 import BeautifulSoup
import requests
from urllib import parse
import json,re
import threadpool
import time
from jd_mysqldb import Goods,Comments,sess_db
 
headers={
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36',
    'Cookie': '__jdv=76161171|baidu|-|organic|%25E4%25BA%25AC%25E4%25B8%259C|1613711947911; __jdu=16137119479101182770449; areaId=7; ipLoc-djd=7-458-466-0; PCSYCityID=CN_410000_0_0; shshshfpa=07383463-032f-3f99-9d40-639cb57c6e28-1613711950; shshshfpb=u8S9UvxK66gfIbM1mUNrIOg%3D%3D; user-key=153f6b4d-0704-4e56-82b6-8646f3f0dad4; cn=0; shshshfp=9a88944b34cb0ff3631a0a95907b75eb; __jdc=122270672; 3AB9D23F7A4B3C9B=SEELVNXBPU7OAA3UX5JTKR5LQADM5YFJRKY23Z6HDBU4OT2NWYGX525CKFFVHTRDJ7Q5DJRMRZQIQJOW5GVBY43XVI; jwotest_product=99; __jda=122270672.16137119479101182770449.1613711948.1613738165.1613748918.4; JSESSIONID=C06EC8D2E9384D2628AE22B1A6F9F8FC.s1; shshshsID=ab2ca3143928b1b01f6c5b71a15fcebe_5_1613750374847; __jdb=122270672.5.16137119479101182770449|4.1613748918',
    'Referer': 'https://www.jd.com/'
}
 
num=0   # number of products crawled
comments_num=0   # number of comments crawled
 
# Fetch product info and SKU IDs from a search-results page
def getIndex(url):
    global num
    skuids=[]
    session=requests.Session()
    session.headers=headers
    res=session.get(url,headers=headers)
    print(res.status_code)
    res.encoding=res.apparent_encoding
    soup=BeautifulSoup(res.text,'lxml')
    items=soup.select('li.gl-item')
    for item in items[:3]:  # crawl 3 products as a test
        title=item.select_one('.p-name a em').text.strip().replace(' ','')
        price=item.select_one('.p-price strong').text.strip().replace('¥','')
        try:
            shop=item.select_one('.p-shopnum a').text.strip()   # shop selector for book listings
        except:
            shop=item.select_one('.p-shop a').text.strip()  # shop selector for other product categories
        link=parse.urljoin('https://',item.select_one('.p-img a').get('href'))
        SkuId=re.search(r'\d+',link).group()
        skuids.append(([SkuId,session],None))
        headers['Referer'] = f'https://item.jd.com/{SkuId}.html'
        headers['Connection'] = 'keep-alive'
        comments_num=getCommentsNum(SkuId,session)  # total comment count
        print(SkuId,title, price, shop, link, comments_num)
        print("开始将商品存入数据库...")
        try:
            IntoGoods(SkuId,title, price, shop, link, comments_num)
        except Exception as e:
            print(e)
            sess_db.rollback()
        num += 1
    print("开始获取评论并存入数据库...")
    pool2=threadpool.ThreadPool(3)   # 可同时获取3个商品的评论
    task=threadpool.makeRequests(getComments,skuids)
    for r in task:
        pool2.putRequest(r)
    pool2.wait()
 
# Fetch all comment pages for one product
def getComments(SkuId,sess):
    # get the total number of comment pages
    url1 = f'https://club.jd.com/comment/productPageComments.action?productId={SkuId}&score=0&sortType=5&page=0&pageSize=10'
    res2 = sess.get(url1, headers=headers)
    res2.encoding = res2.apparent_encoding
    json_data = json.loads(res2.text)
    max_page = json_data['maxPage']  # testing shows at most 100 pages of comments, 10 per page
    print("{} has {} pages of comments".format(SkuId, max_page))
    if max_page == 0:
        IntoComments(SkuId, '0')
    else:
        for i in range(0, max_page):
            # this URL returns the comments as plain JSON
            url2 = f'https://club.jd.com/comment/productPageComments.action?productId={SkuId}&score=0&sortType=5&page={i}&pageSize=10'
            # this URL returns a JSONP response that has to be unwrapped first
            # url2_2=f'https://club.jd.com/comment/productPageComments.action?callback=jQuery9287224&productId={SkuId}&score=0&sortType=5&page={i}&pageSize=10'
            print("开始获取第{}页评论:{}".format(i + 1, url2))
            getComments_one(sess, SkuId, url2)
            time.sleep(1)   # throttle: one comment page per second per product
 
# Get the total comment count for a product
def getCommentsNum(SkuId,sess):
    url=f'https://club.jd.com/comment/productCommentSummaries.action?referenceIds={SkuId}'
    res=sess.get(url)
    try:
        res.encoding=res.apparent_encoding
        json_data=json.loads(res.text)  # parse the JSON response into a dict
        num=json_data['CommentsCount'][0]['CommentCount']
        return num
    except:
        return 'Error'
 
# Fetch a single page of comments and store each comment
def getComments_one(sess,SkuId,url2):
    global comments_num
    res2 = sess.get(url2)
    res2.encoding=res2.apparent_encoding
    json_data=res2.text
    '''
    # if url2_2 is used instead, extract the JSON from the JSONP wrapper like this:
    start = res2.text.find('jQuery9287224(') + len('jQuery9287224(')
    end = res2.text.find(');')
    json_data=res2.text[start:end]
    '''
    dict_data = json.loads(json_data)
    comments=dict_data['comments']
    for item in comments:
        comment=item['content'].replace('\n','')
        # print(comment)
        comments_num+=1
        try:
            IntoComments(SkuId,comment)
        except Exception as e:
            print(e)
            print("rollback!")
            sess_db.rollback()
 
# Insert a product row into the database
def IntoGoods(SkuId,title, price, shop, link, comments_num):
    goods_data=Goods(
        sku_id=SkuId,
        name=title,
        price=price,
        comments_num=comments_num,
        shop=shop,
        link=link
    )
    sess_db.add(goods_data)
    sess_db.commit()
 
# Insert a comment row into the database
def IntoComments(SkuId,comment):
    comments_data=Comments(
        sku_id=SkuId,
        comments=comment
    )
    sess_db.add(comments_data)
    sess_db.commit()
 
if __name__ == '__main__':
    start_time=time.time()
    urls=[]
    KEYWORD=parse.quote(input("Enter a search keyword: "))
    for i in range(1,2):    # crawl one search page as a test
        url=f'https://search.jd.com/Search?keyword={KEYWORD}&wq={KEYWORD}&page={i}'
        urls.append(([url,],None))  # the argument format threadpool requires
    pool=threadpool.ThreadPool(2)  # thread pool with 2 threads
    reque=threadpool.makeRequests(getIndex,urls)    # create the tasks
    for r in reque:
        pool.putRequest(r)  # submit each task to the pool
    pool.wait() # wait for all tasks to finish
    print("Crawled {} products and {} comments, elapsed {}s".format(num,comments_num,time.time()-start_time))
