Detecting Plagiarism with Elasticsearch (Part 2)

In the previous article, "Detecting Plagiarism with Elasticsearch (Part 1)", I showed how to detect plagiarized articles, which matters in many real-world settings. My own articles on CSDN are frequently quoted or copied, sometimes without any attribution, which is unfair to the author, and the technique is just as relevant for many other blog sites. That article, however, may not be easy for every developer to get running. In this article I deliberately use a local, self-managed deployment together with a Jupyter notebook, so that developers can run the whole workflow end to end, step by step.

Installation

Installing Elasticsearch and Kibana

If you do not yet have your own Elasticsearch and Kibana running, please refer to the following articles to install them:

  • How to install Elasticsearch on Linux, macOS, and Windows

  • Kibana: How to install Kibana from the Elastic Stack on Linux, macOS, and Windows

When installing, choose Elastic Stack 8.x. During installation, take note of the generated elastic password and certificate information printed by the installer; we will need them below.

To be able to upload the models, the cluster must have a Platinum license or an active trial.

Uploading the models

Note: if you upload the models from the command line here, you do not need to upload them again in the notebook code below and can skip those steps.

You can refer to the earlier article "Elasticsearch: Using an NLP question-answering model to talk to your favorite Christmas songs". We use the following command to upload the OpenAI detector model:

eland_import_hub_model --url https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200 \
    --hub-model-id roberta-base-openai-detector \
    --task-type text_classification \
    --ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
    --start

In the command above, adjust the certificate path and the Elasticsearch endpoint to match your own deployment.

We can see the newly uploaded model in Kibana.

Next, we upload the text embedding model in the same way:

eland_import_hub_model --url https://elastic:o6G_pvRL=8P*7on+o6XH@localhost:9200 \
    --hub-model-id sentence-transformers/all-mpnet-base-v2 \
    --task-type text_embedding \
    --ca-cert /Users/liuxg/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt \
    --start

To make it easier to follow along, you can download the code from:

git clone https://github.com/liu-xiao-guo/elasticsearch-labs

The Jupyter notebook can be found at the following location:

$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
plagiarism_detection_es_self_managed.ipynb

Running the code

Next, we run the notebook. First, install the required Python packages:

pip3 install elasticsearch==8.11
pip3 -q install eland elasticsearch sentence_transformers transformers torch==2.1.0

Before running the code, set the following environment variables:

export ES_USER="elastic"
export ES_PASSWORD="o6G_pvRL=8P*7on+o6XH"
export ES_ENDPOINT="localhost"

We also need to copy the Elasticsearch certificate into the current directory:

$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ cp ~/elastic/elasticsearch-8.11.0/config/certs/http_ca.crt .
$ ls
http_ca.crt                                plagiarism_detection_es_self_managed.ipynb
plagiarism_detection_es.ipynb

Import the packages:

from elasticsearch import Elasticsearch, helpers
from elasticsearch.client import MlClient
from eland.ml.pytorch import PyTorchModel
from eland.ml.pytorch.transformers import TransformerModel
from urllib.request import urlopen
import json
from pathlib import Path
import os

Connecting to Elasticsearch

elastic_user = os.getenv('ES_USER')
elastic_password = os.getenv('ES_PASSWORD')
elastic_endpoint = os.getenv("ES_ENDPOINT")

url = f"https://{elastic_user}:{elastic_password}@{elastic_endpoint}:9200"
client = Elasticsearch(url, ca_certs="./http_ca.crt", verify_certs=True)
print(client.info())

Uploading the detector model

hf_model_id = 'roberta-base-openai-detector'
tm = TransformerModel(model_id=hf_model_id, task_type="text_classification")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body

We can check the deployed model in Kibana.
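If you would rather verify from the notebook than in Kibana, a quick status check with the ML client works as well. This is a minimal sketch of my own (not part of the original notebook), reusing the client and es_model_id defined above:

# Confirm the detector model was imported and its deployment has started
stats = MlClient.get_trained_models_stats(client, model_id=es_model_id)
for model in stats["trained_model_stats"]:
    state = model.get("deployment_stats", {}).get("state", "not deployed")
    print(model["model_id"], "->", state)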

Uploading the text embedding model

hf_model_id = 'sentence-transformers/all-mpnet-base-v2'
tm = TransformerModel(model_id=hf_model_id, task_type="text_embedding")

# Set the model ID as it is named in Elasticsearch
es_model_id = tm.elasticsearch_model_id()

# Download the model from Hugging Face
tmp_path = "models"
Path(tmp_path).mkdir(parents=True, exist_ok=True)
model_path, config, vocab_path = tm.save(tmp_path)

# Load the model into Elasticsearch
ptm = PyTorchModel(client, es_model_id)
ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)

# Start the model
s = MlClient.start_trained_model_deployment(client, model_id=es_model_id)
s.body

We can view this model in Kibana as well.
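As an optional smoke test (my own addition, not part of the original notebook), we can run one inference call and confirm that the embedding model returns a 768-dimensional vector, which must match the dims of the dense_vector mapping we create below:

# Embed a short test sentence and check the vector size
test_doc = [{"text_field": "plagiarism detection with Elasticsearch"}]
result = MlClient.infer_trained_model(
    client, model_id="sentence-transformers__all-mpnet-base-v2", docs=test_doc)
print(len(result["inference_results"][0]["predicted_value"]))  # expect 768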

Creating the source index

client.indices.create(
    index="plagiarism-docs",
    mappings={
        "properties": {
            "title": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "abstract": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "url": {"type": "keyword"},
            "venue": {"type": "keyword"},
            "year": {"type": "keyword"}
        }
    }
)

We can view the newly created index in Kibana.
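The mapping can also be checked directly from the notebook; a minimal sketch, assuming the index was just created as above:

# Retrieve and print the mapping of the source index
mapping = client.indices.get_mapping(index="plagiarism-docs")
print(mapping["plagiarism-docs"]["mappings"])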

Creating the checker ingest pipeline

client.ingest.put_pipeline(
    id="plagiarism-checker-pipeline",
    processors=[
        {
            "inference": {  # for ML models - infer against the data being ingested in the pipeline
                "model_id": "roberta-base-openai-detector",  # text classification model id
                "target_field": "openai-detector",  # target field for the inference results
                "field_map": {
                    # Maps the document field name to the known field name of the model.
                    # Typically for NLP models, the field name is text_field.
                    "abstract": "text_field"
                }
            }
        },
        {
            "inference": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",  # text embedding model id
                "target_field": "abstract_vector",  # target field for the inference results
                "field_map": {
                    "abstract": "text_field"
                }
            }
        }
    ]
)

We can view the pipeline in Kibana.
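A programmatic check is possible here too; this sketch simply reads the pipeline definition back:

# Confirm the pipeline exists and inspect its two inference processors
pipeline = client.ingest.get_pipeline(id="plagiarism-checker-pipeline")
print(pipeline["plagiarism-checker-pipeline"]["processors"])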

Creating the plagiarism checker index

client.indices.create(
    index="plagiarism-checker",
    mappings={
        "properties": {
            "title": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "abstract": {"type": "text", "fields": {"keyword": {"type": "keyword"}}},
            "url": {"type": "keyword"},
            "venue": {"type": "keyword"},
            "year": {"type": "keyword"},
            "abstract_vector.predicted_value": {  # inference results field: target_field.predicted_value
                "type": "dense_vector",
                "dims": 768,  # embedding size of all-mpnet-base-v2
                "index": "true",
                # When indexing vectors for approximate kNN search, you must specify the
                # similarity function. dot_product requires unit-length vectors;
                # all-mpnet-base-v2 outputs normalized embeddings, so it is safe here.
                "similarity": "dot_product"
            }
        }
    }
)

We can view this index in Kibana as well.

Indexing the source documents

First, download the documents at https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/datasets/emnlp2016-2018.json into the current directory:

$ pwd
/Users/liuxg/python/elasticsearch-labs/supporting-blog-content/plagiarism-detection-with-elasticsearch
$ ls
emnlp2016-2018.json                        plagiarism_detection_es.ipynb
http_ca.crt                                plagiarism_detection_es_self_managed.ipynb
models

As shown above, emnlp2016-2018.json is the dataset we downloaded.

# Load data into a JSON object
with open('emnlp2016-2018.json') as f:
    data_json = json.load(f)

print(f"Successfully loaded {len(data_json)} documents")

def create_index_body(doc):
    """ Generate the body for an Elasticsearch document. """
    return {
        "_index": "plagiarism-docs",
        "_source": doc,
    }

# Prepare the documents to be indexed
documents = [create_index_body(doc) for doc in data_json]

# Use helpers.bulk to index
helpers.bulk(client, documents)

print("Done indexing documents into `plagiarism-docs` source index")

We can browse the indexed documents in Kibana.
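To double-check from the notebook, counting the documents in the source index is enough; a minimal sketch:

# Verify how many documents landed in the source index
client.indices.refresh(index="plagiarism-docs")
print(client.count(index="plagiarism-docs")["count"])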

Reindexing through the ingest pipeline

client.reindex(
    wait_for_completion=False,
    source={"index": "plagiarism-docs"},
    dest={"index": "plagiarism-checker", "pipeline": "plagiarism-checker-pipeline"}
)

Above, we set wait_for_completion=False, which makes the reindex an asynchronous operation, so we need to give it some time to complete. We can track its progress by comparing the document counts of the two indices, as shown below.
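A minimal sketch for this check (my own substitute for the Kibana screenshot used originally):

# The reindex is complete once the destination count matches the source count
client.indices.refresh(index="plagiarism-checker")
source_count = client.count(index="plagiarism-docs")["count"]
dest_count = client.count(index="plagiarism-checker")["count"]
print(f"plagiarism-docs: {source_count}, plagiarism-checker: {dest_count}")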

Once the two counts match, the reindex has finished. We can then take a look at the enriched documents in the plagiarism-checker index in Kibana.

Checking for duplicated text

Direct plagiarism

model_text = 'Understanding and reasoning about cooking recipes is a fruitful research direction towards enabling machines to interpret procedural text. In this work, we introduce RecipeQA, a dataset for multimodal comprehension of cooking recipes. It comprises of approximately 20K instructional recipes with multiple modalities such as titles, descriptions and aligned set of images. With over 36K automatically generated question-answer pairs, we design a set of comprehension and reasoning tasks that require joint understanding of images and text, capturing the temporal flow of events and making sense of procedural knowledge. Our preliminary results indicate that RecipeQA will serve as a challenging test bed and an ideal benchmark for evaluating machine comprehension systems. The data and leaderboard are available at http://hucvl.github.io/recipeqa.'

# kNN search against the abstract embeddings, embedding the query text on the fly
response = client.search(index='plagiarism-checker', size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response['hits']['hits']:
    score = hit['_score']
    title = hit['_source']['title']
    abstract = hit['_source']['abstract']
    openai = hit['_source']['openai-detector']['predicted_value']
    url = hit['_source']['url']

    if score > 0.9:
        print("\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == 'Fake':
            print("This document may have been created by AI.\n")
    elif score < 0.7:
        print("\nLow similarity detected. This might not be plagiarism.")
        if openai == 'Fake':
            print("This document may have been created by AI.\n")
    else:
        print("\nModerate similarity detected.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == 'Fake':
            print("This document may have been created by AI.\n")

# Also classify the query text itself with the OpenAI detector
ml_client = MlClient(client)
model_id = 'roberta-base-openai-detector'  # OpenAI text classification model
document = [{"text_field": model_text}]
ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)
predicted_value = ml_response['inference_results'][0]['predicted_value']
if predicted_value == 'Fake':
    print("Note: The text query you entered may have been generated by AI.\n")

Similar text: paraphrase plagiarism

model_text = 'Comprehending and deducing information from culinary instructions represents a promising avenue for research aimed at empowering artificial intelligence to decipher step-by-step text. In this study, we present CuisineInquiry, a database for the multifaceted understanding of cooking guidelines. It encompasses a substantial number of informative recipes featuring various elements such as headings, explanations, and a matched assortment of visuals. Utilizing an extensive set of automatically crafted question-answer pairings, we formulate a series of tasks focusing on understanding and logic that necessitate a combined interpretation of visuals and written content. This involves capturing the sequential progression of events and extracting meaning from procedural expertise. Our initial findings suggest that CuisineInquiry is poised to function as a demanding experimental platform.'

# Same kNN search as above, this time with a paraphrased abstract as the query
response = client.search(index='plagiarism-checker', size=1,
    knn={
        "field": "abstract_vector.predicted_value",
        "k": 9,
        "num_candidates": 974,
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-mpnet-base-v2",
                "model_text": model_text
            }
        }
    }
)

for hit in response['hits']['hits']:
    score = hit['_score']
    title = hit['_source']['title']
    abstract = hit['_source']['abstract']
    openai = hit['_source']['openai-detector']['predicted_value']
    url = hit['_source']['url']

    if score > 0.9:
        print("\nHigh similarity detected! This might be plagiarism.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == 'Fake':
            print("This document may have been created by AI.\n")
    elif score < 0.7:
        print("\nLow similarity detected. This might not be plagiarism.")
        if openai == 'Fake':
            print("This document may have been created by AI.\n")
    else:
        print("\nModerate similarity detected.")
        print(f"\nMost similar document: '{title}'\n\nAbstract: {abstract}\n\nurl: {url}\n\nScore: {score}\n")
        if openai == 'Fake':
            print("This document may have been created by AI.\n")

# Also classify the query text itself with the OpenAI detector
ml_client = MlClient(client)
model_id = 'roberta-base-openai-detector'  # OpenAI text classification model
document = [{"text_field": model_text}]
ml_response = ml_client.infer_trained_model(model_id=model_id, docs=document)
predicted_value = ml_response['inference_results'][0]['predicted_value']
if predicted_value == 'Fake':
    print("Note: The text query you entered may have been generated by AI.\n")

The complete notebook can be downloaded at: https://github.com/liu-xiao-guo/elasticsearch-labs/blob/main/supporting-blog-content/plagiarism-detection-with-elasticsearch/plagiarism_detection_es_self_managed.ipynb
