A framework of PERformance METRICS (PerMetrics) for artificial intelligence models
"Knowledge is power, sharing it is the premise of progress in life. It seems like a burden to someone, but it is the only way to achieve immortality." --- Thieu Nguyen
Introduction

PerMetrics is a Python library of performance metrics for machine learning models.

The goals of this framework are:
- To combine all metrics for regression, classification, and clustering models
- To help users in every field access the metrics they need as quickly as possible
- To support qualitative analysis of models
- To support quantitative analysis of models

Metrics
| Problem | STT | Metric | Metric Fullname | Characteristics |
|---|---|---|---|---|
| Regression | 1 | EVS | Explained Variance Score | Larger is better (Best = 1) |
| | 2 | ME | Max Error | Smaller is better (Best = 0) |
| | 3 | MAE | Mean Absolute Error | Smaller is better (Best = 0) |
| | 4 | MSE | Mean Squared Error | Smaller is better (Best = 0) |
| | 5 | RMSE | Root Mean Squared Error | Smaller is better (Best = 0) |
| | 6 | MSLE | Mean Squared Log Error | Smaller is better (Best = 0) |
| | 7 | MedAE | Median Absolute Error | Smaller is better (Best = 0) |
| | 8 | MRE | Mean Relative Error | Smaller is better (Best = 0) |
| | 9 | MAPE | Mean Absolute Percentage Error | Smaller is better (Best = 0) |
| | 10 | SMAPE | Symmetric Mean Absolute Percentage Error | Smaller is better (Best = 0) |
| | 11 | MAAPE | Mean Arctangent Absolute Percentage Error | Smaller is better (Best = 0) |
| | 12 | MASE | Mean Absolute Scaled Error | Smaller is better (Best = 0) |
| | 13 | NSE | Nash-Sutcliffe Efficiency Coefficient | Larger is better (Best = 1) |
| | 14 | WI | Willmott Index | Larger is better (Best = 1) |
| | 15 | R | Pearson’s Correlation Index | Larger is better (Best = 1) |
| | 16 | CI | Confidence Index | Larger is better (Best = 1) |
| | 17 | R2 | Coefficient of Determination | Larger is better (Best = 1) |
| | 18 | R2s | (Pearson’s Correlation Index) ^ 2 | Larger is better (Best = 1) |
| | 19 | DRV | Deviation of Runoff Volume | Smaller is better (Best = 0) |
| | 20 | KGE | Kling-Gupta Efficiency | Larger is better (Best = 1) |
| Single Loss | 1 | RE | Relative error | Smaller is better (Best = 0) |
| | 2 | AE | Absolute error | Smaller is better (Best = 0) |
| | 3 | SE | Squared error | Smaller is better (Best = 0) |
| | 4 | SLE | Squared log error | Smaller is better (Best = 0) |
| | 5 | LL | Log likelihood | Smaller is better (Best = 0) |
| Classification | 1 | MLL | Mean Log Likelihood | Smaller is better (Best = 0) |
| Clustering | 1 | | | |
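For reference, the first few regression rows follow the standard textbook definitions. A minimal sketch with plain numpy, independent of the library, showing why "smaller is better" with a best value of 0 for MAE and RMSE:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the residuals.
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root Mean Squared Error: square root of the mean squared residual.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

print(mae(y_true, y_pred))   # 0.5
print(rmse(y_true, y_pred))  # ~0.61237
```

A perfect prediction (`y_pred == y_true`) drives both values to exactly 0, which is why 0 is listed as the best score.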
Dependencies
- Python (>= 3.6)
- numpy (>= 1.15.1)

Installation

pip install permetrics
Or install the development version from GitHub.

Examples
- All you need to do (make sure your `y_true` and `y_pred` are numpy arrays):
* Simple example, e.g. with RMSE:

```python
from numpy import array
from permetrics.regression import Metrics

## For 1-D arrays
y_true = array([3, -0.5, 2, 7])
y_pred = array([2.5, 0.0, 2, 8])

obj1 = Metrics(y_true, y_pred)
print(obj1.rmse_func(clean=True, decimal=5))

## For > 1-D arrays
y_true = array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = array([[0, 2], [-1, 2], [8, -5]])

multi_outputs = [None, "raw_values", [0.3, 1.2], array([0.5, 0.2]), (0.1, 0.9)]
obj2 = Metrics(y_true, y_pred)
for multi_output in multi_outputs:
    print(obj2.rmse_func(clean=False, multi_output=multi_output, decimal=5))
```

* Or run the simple script: `python examples/RMSE.py`
* More complicated examples are in the `examples` folder.
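How `rmse_func` combines the columns of a 2-D input is not spelled out here. Assuming `multi_output` follows the common scikit-learn convention ("raw_values" returns one error per output column, `None` averages them uniformly, and a weight vector takes a weighted average) — an assumption worth checking against the library source — the 2-D case above reduces to plain numpy as:

```python
import numpy as np

y_true = np.array([[0.5, 1], [-1, 1], [7, -6]], dtype=float)
y_pred = np.array([[0, 2], [-1, 2], [8, -5]], dtype=float)

# Per-column RMSE, i.e. multi_output="raw_values"
raw = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
print(raw)

# Uniform average over columns, i.e. multi_output=None
print(raw.mean())

# Weighted average, e.g. multi_output=[0.3, 1.2]
weights = np.array([0.3, 1.2])
print(np.average(raw, weights=weights))
```

Note that averaging the per-column RMSE values is not the same as taking one RMSE over the flattened arrays; the `multi_output` options exist precisely to let you choose how the per-output errors are aggregated.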
The documentation contains more detailed installation instructions and explanations.
Changelog
- See ChangeLog.md for a history of notable changes to permetrics.
Important links
This project is also related to my other projects on "meta-heuristics" and "neural networks"; check them out.
Donations
Citation
- If you use permetrics in your project, please cite my works:
```
@software{thieu_nguyen_2020_3951205,
  author    = {Thieu Nguyen},
  title     = {A framework of PERformance METRICS (PerMetrics) for artificial intelligence models},
  month     = jul,
  year      = 2020,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.3951205},
  url       = {https://doi.org/10.5281/zenodo.3951205}
}

@article{nguyen2019efficient,
  title     = {Efficient Time-Series Forecasting Using Neural Network and Opposition-Based Coral Reefs Optimization},
  author    = {Nguyen, Thieu and Nguyen, Tu and Nguyen, Binh Minh and Nguyen, Giang},
  journal   = {International Journal of Computational Intelligence Systems},
  volume    = {12},
  number    = {2},
  pages     = {1144--1161},
  year      = {2019},
  publisher = {Atlantis Press}
}
```
Future works

Classification
- F1 score
- Multi-class log loss
- Lift
- Average precision for binary classification
- Precision / recall break-even point
- Cross-entropy
- True-positive / false-positive / true-negative / false-negative rates
- Precision / recall / sensitivity / specificity
- Mutual information
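Several of the planned rate-based metrics above reduce to simple counts of true/false positives and negatives. Until they land in the library, they can be computed by hand; a self-contained sketch using the standard definitions (the helper names below are illustrative, not part of the permetrics API):

```python
def confusion_counts(y_true, y_pred, positive=1):
    # Count true positives, false positives, true negatives, false negatives
    # for binary labels.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

def precision_recall_f1(y_true, y_pred):
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

Specificity follows the same pattern as recall but over the negatives: `tn / (tn + fp)`.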
Processing
- Grouping / reducing
- Weighting individual samples or groups

Properties metrics can have
- Minimum or maximum (optimize by minimizing or maximizing)
- Binary classification
    - Scores predicted class labels
    - Scores predicted rankings (from most likely to least likely to be in a class)
    - Scores predicted probabilities
- Multi-class classification
    - Scores predicted class labels
    - Scores predicted probabilities
- Regression (more)
- Discrete rater comparison (confusion matrix)
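The distinction between scoring labels and scoring probabilities matters in practice: a label metric such as accuracy ignores confidence, while a probability metric such as cross-entropy (log loss) rewards well-calibrated confidence. A minimal illustration with plain numpy, using two hypothetical models that predict identical labels at a 0.5 threshold but differ in confidence:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])

probs_a = np.array([0.6, 0.4, 0.7, 0.6])     # mildly confident, all correct
probs_b = np.array([0.99, 0.01, 0.99, 0.51])  # mostly very confident, all correct

for probs in (probs_a, probs_b):
    labels = (probs >= 0.5).astype(int)
    accuracy = np.mean(labels == y_true)
    # Binary cross-entropy (log loss) over predicted probabilities
    log_loss = -np.mean(y_true * np.log(probs) + (1 - y_true) * np.log(1 - probs))
    print(accuracy, log_loss)
```

Both models score an accuracy of 1.0, yet the confident model earns a lower log loss; only the probability metric can tell them apart.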