1. pandarallel (pip install pandarallel)
For a simple use case with a Pandas DataFrame df and a function func to apply, just replace the classic apply with parallel_apply.
from pandarallel import pandarallel

# Initialization
pandarallel.initialize()

# Standard pandas apply
df.apply(func)

# Parallel apply
df.parallel_apply(func)
Note that the classic apply method is still available if you do not want to parallelize the computation.
You can also display one progress bar per worker CPU by passing progress_bar=True to the initialize function.
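A minimal sketch of enabling the per-worker progress bars (the DataFrame, column names, and the double helper are made up here for illustration):

import pandas as pd
from pandarallel import pandarallel

# One progress bar per worker process
pandarallel.initialize(progress_bar=True)

df = pd.DataFrame({"x": range(100000)})

def double(v):
    return v * 2

# parallel_apply is used exactly like apply
df["y"] = df["x"].parallel_apply(double)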
2. joblib (pip install joblib)
https://pypi.python.org/pypi/joblib
# Embarrassingly parallel helper: to make it easy to write readable parallel code and debug it quickly
import time
from math import sqrt
from joblib import Parallel, delayed

def test():
    start = time.time()
    # Serial run: a single worker
    result1 = Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10000))
    end = time.time()
    print(end - start)
    # Parallel run: 8 workers
    result2 = Parallel(n_jobs=8)(delayed(sqrt)(i**2) for i in range(10000))
    end2 = time.time()
    print(end2 - end)
------- Output -------
0.4434356689453125
0.6346755027770996
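For pandas work, the same Parallel/delayed pattern is usually applied to chunks of a DataFrame rather than to single elements. A minimal sketch under that assumption (the DataFrame, the url column, and the chunk count are illustrative placeholders, not part of the original example):

import numpy as np
import pandas as pd
from joblib import Parallel, delayed

def url_len(chunk):
    # A plain pandas apply runs inside each worker process
    return chunk["url"].apply(lambda u: len(u.split(".")))

df = pd.DataFrame({"url": ["www.example.com"] * 10000})

# Split the DataFrame into one chunk per worker, process the chunks in
# parallel, then stitch the partial results back together by index
chunks = np.array_split(df, 8)
df["len"] = pd.concat(Parallel(n_jobs=8)(delayed(url_len)(c) for c in chunks))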
3. multiprocessing
import multiprocessing as mp

with mp.Pool(mp.cpu_count()) as pool:
    df['newcol'] = pool.map(f, df['col'])
multiprocessing.cpu_count() returns the number of CPUs in the system.
This number is not the same as the number of CPUs the current process can use; the number of usable CPUs can be obtained with len(os.sched_getaffinity(0)).
It may raise NotImplementedError.
See also os.cpu_count().
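A short sketch contrasting the two counts; os.sched_getaffinity only exists on some platforms (e.g. Linux), so the call is guarded:

import multiprocessing as mp
import os

# Number of CPUs in the system (may raise NotImplementedError on some platforms)
print("system CPUs:", mp.cpu_count())

# Number of CPUs the current process is actually allowed to use
if hasattr(os, "sched_getaffinity"):
    print("usable CPUs:", len(os.sched_getaffinity(0)))
else:
    # Fallback where sched_getaffinity is unavailable (e.g. Windows, macOS)
    print("usable CPUs:", os.cpu_count())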
4. Performance comparison of the approaches
(1) Code
import sys
import time
import pandas as pd
import multiprocessing as mp
from joblib import Parallel, delayed
from pandarallel import pandarallel
from tqdm import tqdm, tqdm_notebook

def get_url_len(url):
    url_list = url.split(".")
    time.sleep(0.01)  # sleep 0.01 s to simulate a complex function
    return len(url_list)

def test1(data):
    """No optimization at all"""
    start = time.time()
    data['len'] = data['url'].apply(get_url_len)
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("res:{}, cost time:{}".format(res, cost_time))

def test_mp(data):
    """Optimized with multiprocessing"""
    start = time.time()
    with mp.Pool(mp.cpu_count()) as pool:
        data['len'] = pool.map(get_url_len, data['url'])
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("test_mp \t res:{}, cost time:{}".format(res, cost_time))

def test_pandarallel(data):
    """Optimized with pandarallel"""
    start = time.time()
    pandarallel.initialize()
    data['len'] = data['url'].parallel_apply(get_url_len)
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("test_pandarallel \t res:{}, cost time:{}".format(res, cost_time))

def test_delayed(data):
    """Optimized with joblib's delayed"""
    def key_func(subset):
        subset["len"] = subset["url"].apply(get_url_len)
        return subset

    start = time.time()
    data_grouped = data.groupby(data.index)
    # data_grouped is iterable, so tqdm can be used to show a progress bar
    results = Parallel(n_jobs=8)(delayed(key_func)(group) for name, group in tqdm(data_grouped))
    data = pd.concat(results)
    end = time.time()
    cost_time = end - start
    res = sum(data['len'])
    print("test_delayed \t res:{}, cost time:{}".format(res, cost_time))

if __name__ == '__main__':
    columns = ['title', 'url', 'pub_old', 'pub_new']
    temp = pd.read_csv("./input.csv", names=columns, nrows=10000)
    data = temp
    """
    for i in range(99):
        data = data.append(temp)
    """
    print(len(data))
    """
    test1(data)
    test_mp(data)
    test_pandarallel(data)
    """
    test_delayed(data)
(2) Results
1k rows
res:4338, cost time:0.0018074512481689453
test_mp res:4338, cost time:0.2626469135284424
test_pandarallel res:4338, cost time:0.3467681407928467
10k rows
res:42936, cost time:0.008773326873779297
test_mp res:42936, cost time:0.26111721992492676
test_pandarallel res:42936, cost time:0.33237743377685547
100k rows
res:426742, cost time:0.07944369316101074
test_mp res:426742, cost time:0.294996976852417
test_pandarallel res:426742, cost time:0.39208269119262695
1M rows
res:4267420, cost time:0.8074917793273926
test_mp res:4267420, cost time:0.9741342067718506
test_pandarallel res:4267420, cost time:0.6779992580413818
10M rows
res:42674200, cost time:8.027287006378174
test_mp res:42674200, cost time:7.751036882400513
test_pandarallel res:42674200, cost time:4.404983282089233
With a sleep statement added to get_url_len (to simulate complex logic) and a data size of 1k rows, the results are as follows:
1k rows
res:4338, cost time:10.054503679275513
test_mp res:4338, cost time:0.35697126388549805
test_pandarallel res:4338, cost time:0.43415403366088867
test_delayed res:4338, cost time:2.294757843017578
5. Summary
(1) When the data volume is small, parallel processing is slower than plain serial execution;
(2) when the applied function is logically simple, parallel processing is also slower, because the overhead of spawning workers and shuttling data between processes outweighs the computation itself.
6. Problems and solutions
(1) ImportError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.
https://www.jianshu.com/p/0be1b4b27bde
(2) Checking the number of physical CPUs, cores, and logical CPUs on Linux (see the sketch after this list)
https://lover.blog.csdn.net/article/details/113951192
(3) Using progress bars
https://www.jb51.net/article/206219.htm
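For reference, a rough sketch of how the counts mentioned in (2) can be read on Linux by parsing /proc/cpuinfo (Linux only; the field names assume the usual x86 layout and this is an illustrative helper, not the method from the linked article):

def cpu_topology(path="/proc/cpuinfo"):
    physical_ids = set()      # distinct "physical id" values -> physical CPUs (sockets)
    cores = set()             # distinct (physical id, core id) pairs -> physical cores
    logical = 0               # "processor" entries -> logical CPUs
    current_physical = None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = (part.strip() for part in line.split(":", 1))
            if key == "processor":
                logical += 1
            elif key == "physical id":
                current_physical = value
                physical_ids.add(value)
            elif key == "core id":
                cores.add((current_physical, value))
    return len(physical_ids), len(cores), logical

print("physical CPUs / cores / logical CPUs:", cpu_topology())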