Ensemble Learning Model Comparison and Optimization: Banking Business

1.Data Understanding

2.Data Exploration

3.Data Preparation

4.Training Models

5.Model Optimization


1.Data Understanding

import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
df = pd.read_csv("D:\\课程学习\\机器学习\\银行客户开设定期存款账户情况预测\\banking.csv")
#Print the shape of the DataFrame
print("1.the shape of the DataFrame")
print(df.shape)

(output screenshot)

# Print the head of the DataFrame
print("2.the head of the DataFrame")
print(df.head())

(output screenshot)

# Print info of the DataFrame
print("3.the info of the DataFrame")
print(df.info())

(output screenshot)

# Print statistical description of the DataFrame
print("4.the statistical description of the DataFrame")
print(df.describe())

(output screenshot)

# Check for any null values in the DataFrame
print("5.Check for any null values in the DataFrame")
datacheck = df.isnull().any()
print(datacheck)

(output screenshot)

# Check for duplicates
print("6.Check for duplicates")
duplicates = df.duplicated()
print(f"Number of duplicated rows: {duplicates.sum()}")

(output screenshot)

print("7.See the duplicated rows")
# See the duplicated rows:
if duplicates.sum() > 0:
    print("\nDuplicated Rows:")
    print(df[duplicates])

(output screenshot)

#pick out the non_numeric_columns
non_numeric_columns = df.select_dtypes(exclude=['number']).columns.to_list()
numeric_columns = df.select_dtypes(include=['number']).columns
print(non_numeric_columns)
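As a quick, self-contained illustration of how `select_dtypes` splits columns by dtype (a tiny made-up frame, not the real bank data):

```python
import pandas as pd

# Tiny made-up frame standing in for the bank data
df_demo = pd.DataFrame({
    "age": [30, 41],
    "job": ["admin.", "technician"],
    "euribor3m": [1.5, 2.0],
})

# Object (string) columns vs. numeric columns
cat_cols = df_demo.select_dtypes(exclude=["number"]).columns.tolist()
num_cols = df_demo.select_dtypes(include=["number"]).columns.tolist()
print(cat_cols)  # ['job']
print(num_cols)  # ['age', 'euribor3m']
```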

2.Data Exploration

# Statistical description of continuous variables
print("1.statistical description of continuous variables")
df.describe()

(output screenshot)

print("2.Check the distribution of labels")
print(df['y'].value_counts())

(output screenshot)

# histograms of continuous variables
print("3.histograms of continuous variables")
num_cols = df.select_dtypes(include=['int64', 'float64']).columns
fig, axes = plt.subplots(nrows=len(num_cols), figsize=(6, 3 * len(num_cols)))
df[num_cols].hist(ax=axes, grid=False)
plt.tight_layout()
plt.show()

(output screenshot)

# Bar charts of categorical variables
print("4.plot the bar charts of categorical variables")
#categorical_features = df.select_dtypes(exclude=['number']).columns.to_list()
categorical_features = ["marital", "default", "housing", "loan", "contact", "poutcome", "y"]
fig, ax = plt.subplots(1, len(categorical_features), figsize=(25, 3))
for i, categorical_feature in enumerate(df[categorical_features]):
    df[categorical_feature].value_counts().plot(kind="bar", ax=ax[i], rot=0).set_title(categorical_feature)
plt.tight_layout()
plt.show()

(output screenshot)

print("5.Check for high correlations")
import numpy as np

# Check for high correlations (numeric_only avoids errors on object columns in newer pandas)
correlation_matrix = df.corr(numeric_only=True).abs()

# Get pairs of highly correlated features
high_corr_var = np.where(correlation_matrix > 0.6)
high_corr_var = [(correlation_matrix.columns[x], correlation_matrix.columns[y])
                 for x, y in zip(*high_corr_var) if x != y and x < y]
print("Highly correlated pairs:", high_corr_var)

# Filter pairs by correlation threshold
threshold = 0.7
high_corr_pairs = {}
for column in correlation_matrix.columns:
    for index in correlation_matrix.index:
        # Only consider pairs of different columns with correlation above the threshold
        if column != index and abs(correlation_matrix[column][index]) > threshold:
            # Avoid duplicating pairs (i.e., A-B and B-A)
            sorted_pair = tuple(sorted([column, index]))
            if sorted_pair not in high_corr_pairs:
                high_corr_pairs[sorted_pair] = correlation_matrix[column][index]

# Display the high correlation pairs
for pair, corr_value in high_corr_pairs.items():
    print(f"Correlation between {pair[0]} and {pair[1]}: {corr_value:.2f}")

(output screenshot)

#Correlation Heatmap
print("6.Correlation Heatmap")
import seaborn as sns
df_duplicate = df.copy()
df_duplicate['y'] = df['y'].map({'no': 0, 'yes': 1})
correlation_matrix2 = df_duplicate.corr(numeric_only=True).abs()
plt.figure(figsize=(12, 10))
sns.heatmap(correlation_matrix2, cmap='Greens', annot=True)
plt.show()

(output screenshot)

# Plotting count plots for each categorical variable with respect to the 'y' column
print("Plotting count plots for each categorical variable with respect to the 'y' column")
categorical_cols = df.select_dtypes(include=['object']).columns.tolist()
print(categorical_cols)
#categorical_cols.remove('y')

# Set up the figure size
plt.figure(figsize=(20, 20))
for i, col in enumerate(categorical_cols, 1):
    plt.subplot(4, 3, i)
    sns.countplot(data=df, x=col, hue='y', order=df[col].value_counts().index)
    plt.title(f'Distribution of {col} by y')
    plt.xticks(rotation=45)
plt.tight_layout()
plt.show()

(output screenshot)

# Distribution of y by Month
print("Distribution of y by Month")
plt.figure(figsize=(11, 5))
sns.countplot(data=df, x='month', hue='y', order=['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec'])
plt.title('Distribution of y by Month')
plt.ylabel('Count')
plt.xlabel('Month')
plt.tight_layout()
plt.show()

(output screenshot)

3.Data Preparation

import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Load the dataset and drop duplicates
df = pd.read_csv("D:\\课程学习\\机器学习\\银行客户开设定期存款账户情况预测\\banking.csv").drop_duplicates()

# Convert the 'y' column to dummy variables
#y = pd.get_dummies(df['y'], drop_first=True)
y = df.iloc[:, -1]

# Process client-related data
bank_client = df.iloc[:, 0:7]
labelencoder_X = LabelEncoder()
columns_to_encode = ['job', 'marital', 'education', 'default', 'housing', 'loan']
for col in columns_to_encode:
    bank_client[col] = labelencoder_X.fit_transform(bank_client[col])

# Process bank-related data
bank_related = df.iloc[:, 7:11]
#columns_to_encode = df.select_dtypes(include=['object']).columns.tolist()
columns_to_encode = ['contact', 'month', 'day_of_week']
for col in columns_to_encode:
    bank_related[col] = labelencoder_X.fit_transform(bank_related[col])

# Process social & economic data (se) and other bank data (others)
bank_se = df.loc[:, ['emp_var_rate', 'cons_price_idx', 'cons_conf_idx', 'euribor3m', 'nr_employed']]
bank_others = df.loc[:, ['campaign', 'pdays', 'previous', 'poutcome']]
bank_others['poutcome'].replace(['nonexistent', 'failure', 'success'], [1, 2, 3], inplace=True)

# Concatenate all the processed parts, reorder columns, and save to CSV
bank_final = pd.concat([bank_client, bank_related, bank_others, bank_se, y], axis=1)
columns_order = ['age', 'job', 'marital', 'education', 'default', 'housing', 'loan', 'contact', 'month',
                 'day_of_week', 'duration', 'campaign', 'pdays', 'previous', 'poutcome', 'emp_var_rate',
                 'cons_price_idx', 'cons_conf_idx', 'euribor3m', 'nr_employed', 'y']
bank_final = bank_final[columns_order]
bank_final.to_csv('bank_final.csv', index=False)

# Display basic information about the final dataframe
print(bank_final.shape)
print(bank_final.head())
print(bank_final.info())
print(bank_final.describe())

# Check for any null values in the DataFrame
datacheck = bank_final.isnull().any()
print(datacheck)

count = bank_final['y'].value_counts()
print(count)
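Note that `LabelEncoder` maps each category to an arbitrary integer, which imposes an ordering that tree ensembles tolerate but linear models may not. A minimal sketch of a one-hot alternative with `pd.get_dummies` (toy columns standing in for the real bank frame):

```python
import pandas as pd

# Toy categorical columns standing in for the bank data
df_demo = pd.DataFrame({
    "job": ["admin.", "technician", "admin."],
    "marital": ["married", "single", "single"],
    "age": [30, 41, 25],
})

# Each category becomes its own 0/1 indicator column;
# drop_first=True removes one redundant column per feature
encoded = pd.get_dummies(df_demo, columns=["job", "marital"], drop_first=True)
print(encoded.columns.tolist())  # ['age', 'job_technician', 'marital_single']
```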

(output screenshot)

4.Training Models

import warnings
warnings.filterwarnings("ignore")
import pandas as pd
from sklearn.model_selection import train_test_split

x = bank_final.drop('y', axis=1)
y = bank_final['y']

# Split bank_final into training and testing sets (80%-20%)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
print(X_train.shape)
print(X_test.shape)

import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import cohen_kappa_score

# Compute evaluation metrics; the results are collected in the df_eval DataFrame
def evaluation(y_test, y_predict):
    report = classification_report(y_test, y_predict, output_dict=True)
    accuracy = report['accuracy']
    s = report['weighted avg']
    precision = s['precision']
    recall = s['recall']
    f1_score = s['f1-score']
    #kappa = cohen_kappa_score(y_test, y_predict)
    return accuracy, precision, recall, f1_score

import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve, auc

def roc(y_test, y_proba):
    gbm_auc = roc_auc_score(y_test, y_proba[:, 1])  # compute AUC
    gbm_fpr, gbm_tpr, gbm_thresholds = roc_curve(y_test, y_proba[:, 1])  # compute ROC curve points
    return gbm_auc, gbm_fpr, gbm_tpr

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score
import time
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

model0 = SVC(kernel="rbf", random_state=77, probability=True)
model1 = MLPClassifier(hidden_layer_sizes=(16, 8), random_state=77, max_iter=10000)
model2 = LogisticRegression()
model3 = RandomForestClassifier()
model4 = AdaBoostClassifier()
model5 = GradientBoostingClassifier()
model6 = XGBClassifier()
model7 = LGBMClassifier()

model_list = [model1, model2, model3, model4, model5, model6, model7]
model_name = ['BP', 'Logistic Regression', 'Random Forest', 'AdaBoost', 'GBDT', 'XGBoost', 'LightGBM']

df_eval = pd.DataFrame(columns=['Accuracy', 'Precision', 'Recall', 'F1_score', 'Running_time'])
plt.figure(figsize=(9, 7))
plt.title("ROC")
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel('false positive rate')  # specificity = 1 - fpr
plt.ylabel('true positive rate')   # sensitivity = tpr
for i in range(7):
    model_C = model_list[i]
    name = model_name[i]
    start = time.time()
    model_C.fit(X_train, y_train)
    end = time.time()
    running_time = end - start
    pred = model_C.predict(X_test)
    pred_proba = model_C.predict_proba(X_test)
    s = evaluation(y_test, pred)
    s = list(s)
    s.append(running_time)
    df_eval.loc[name, :] = s
    gbm_auc, gbm_fpr, gbm_tpr = roc(y_test, pred_proba)
    plt.plot(gbm_fpr, gbm_tpr, label="%s(AUC=%.4f)" % (name, gbm_auc))
print(df_eval)
plt.legend(loc='upper right')
plt.show()
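`cross_val_score` is imported above but never used; a single train/test split can be noisy, and a cross-validated comparison would look roughly like the sketch below (run on synthetic data here rather than the actual `X_train`):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the bank training data
X_demo, y_demo = make_classification(n_samples=300, n_features=8, random_state=42)

# 5-fold cross-validated weighted F1 for one model; repeating this per model
# gives a more stable comparison than a single hold-out split
scores = cross_val_score(LogisticRegression(max_iter=1000), X_demo, y_demo,
                         cv=5, scoring="f1_weighted")
print(len(scores), round(scores.mean(), 2))
```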


(output screenshot)

# Confusion matrix graph
from lightgbm import LGBMClassifier
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

model = LGBMClassifier()
model.fit(X_train, y_train)
pred = model.predict(X_test)
Confusion_Matrixn = confusion_matrix(y_test, pred)
Confusion_Matrix = confusion_matrix(y_test, pred, normalize='true')
print(Confusion_Matrixn)

disp = ConfusionMatrixDisplay(confusion_matrix=Confusion_Matrix)  # display_labels=model.classes_
disp.plot(cmap='Greens')  # any Matplotlib colormap name works here, e.g. 'Blues', 'viridis'
plt.show()

(output screenshot)

5.Model Optimization

# Train/test split with undersampling to bring class 0 to 10,000 and oversampling to bring class 1 to 10,000
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.preprocessing import StandardScaler

# Apply SMOTE to oversample class 1 to 10,000 samples
smote_strategy = {1: 10000}
smote = SMOTE(sampling_strategy=smote_strategy, random_state=42)
x_train_resampled, y_train_resampled = smote.fit_resample(X_train, y_train)

# Apply RandomUnderSampler to undersample class 0 to 10,000 samples
undersample_strategy = {0: 10000}
undersampler = RandomUnderSampler(sampling_strategy=undersample_strategy, random_state=42)
x_train_resampled, y_train_resampled = undersampler.fit_resample(x_train_resampled, y_train_resampled)

# Check the new class distribution
print(y_train_resampled.value_counts())
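For intuition, the same two-sided rebalancing can be sketched without imblearn using `sklearn.utils.resample` (plain random over/undersampling on a made-up frame; SMOTE instead synthesizes new minority points by interpolation):

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical imbalanced frame: 30 negatives, 5 positives
df_demo = pd.DataFrame({"x": range(35), "y": [0] * 30 + [1] * 5})
majority = df_demo[df_demo["y"] == 0]
minority = df_demo[df_demo["y"] == 1]

# Oversample the minority (with replacement) and undersample the majority
# down to a common size of 10 rows each
minority_up = resample(minority, replace=True, n_samples=10, random_state=42)
majority_down = resample(majority, replace=False, n_samples=10, random_state=42)

balanced = pd.concat([majority_down, minority_up])
print(balanced["y"].value_counts().to_dict())  # {0: 10, 1: 10}
```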

(output screenshot)

# Check for NaN values in x_train_resampled after resampling
nan_values = x_train_resampled.isnull().sum()
print("\nNumber of NaN values in each column:")
print(nan_values)

(output screenshot)

# If there are any NaN values, print the columns that contain them
nan_columns = nan_values[nan_values > 0].index.tolist()
if nan_columns:
    print("\nColumns with NaN values:", nan_columns)
else:
    print("\nThere are no NaN values in the resampled data.")

(output screenshot)

# Before resampling
print(y_train.value_counts())

# After resampling
print(y_train_resampled.value_counts())

(output screenshot)

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score
import time
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

model0 = SVC(kernel="rbf", random_state=77, probability=True)
model1 = MLPClassifier(hidden_layer_sizes=(16, 8), random_state=77, max_iter=10000)
model2 = LogisticRegression()
model3 = RandomForestClassifier()
model4 = AdaBoostClassifier()
model5 = GradientBoostingClassifier()
model6 = XGBClassifier()
model7 = LGBMClassifier()

model_list = [model1, model2, model3, model4, model5, model6, model7]
model_name = ['BP', 'Logistic Regression', 'Random Forest', 'AdaBoost', 'GBDT', 'XGBoost', 'LightGBM']

df_eval = pd.DataFrame(columns=['Accuracy', 'Precision', 'Recall', 'F1_score', 'Running_time'])
plt.figure(figsize=(9, 7))
plt.title("ROC")
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel('false positive rate')  # specificity = 1 - fpr
plt.ylabel('true positive rate')   # sensitivity = tpr
for i in range(7):
    model_C = model_list[i]
    name = model_name[i]
    start = time.time()
    model_C.fit(x_train_resampled, y_train_resampled)
    end = time.time()
    running_time = end - start
    pred = model_C.predict(X_test)
    pred_proba = model_C.predict_proba(X_test)
    s = evaluation(y_test, pred)
    s = list(s)
    s.append(running_time)
    df_eval.loc[name, :] = s
    gbm_auc, gbm_fpr, gbm_tpr = roc(y_test, pred_proba)
    plt.plot(gbm_fpr, gbm_tpr, label="%s(AUC=%.4f)" % (name, gbm_auc))
print(df_eval)
plt.legend(loc='upper right')
plt.show()

(output screenshots)

# Confusion matrix graph
from xgboost import XGBClassifier
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

model = XGBClassifier()
model.fit(x_train_resampled, y_train_resampled)
pred = model.predict(X_test)
Confusion_Matrixn = confusion_matrix(y_test, pred)
Confusion_Matrix = confusion_matrix(y_test, pred, normalize='true')
print(Confusion_Matrixn)

disp = ConfusionMatrixDisplay(confusion_matrix=Confusion_Matrix)  # display_labels=model.classes_
disp.plot(cmap='Greens')  # any Matplotlib colormap name works here
plt.show()

(output screenshot)
Dataset source: https://www.heywhale.com/home/user/profile/6535165d9217caa11b5ee5b3/overview
