Fighting Magic with Magic! AutoGPT Tames ChatGPT into Writing Code for Me

Carbon-based life is forever finding ways to be lazy,
and lately even describing a task precisely to ChatGPT in prompts to get code out has felt exhausting
Then I finally caught the magic wand:
AutoGPT, the GitHub sensation, now has a web version
It calls ChatGPT (3.5) automatically
You type natural language; it decomposes the task and outputs the results in stages
Use it to write code or revise code,
or toss in a chunk of code and have it explain what the code does

It feels like taking flight

The AutoGPT Website

The walkthrough below uses the web implementation of AutoGPT
URL: https://app.cognosys.ai/
It's free; all you need is an email address to register (Chinese email providers work, and no VPN is required)
(screenshot)

A Worked Example

As an example, let's scrape ten years of daily precious-metal prices from a public website:
Type a very rough instruction (plain natural language) into the text box on the left

Entering the Command

(screenshot)
Don't ask why I'm not just using ChatGPT directly: coming up with precise commands and prompts is itself a mental workout
Also: if you'd rather not compose the English yourself, you can write in Chinese and have the AI in Notion (or a similar tool) translate it into English

Turn on the automated task-decomposition option below the text box
In the top-left corner there is an option to plug in your own OpenAI API key. You can skip it; by default the site makes the calls itself (the downside is that longer answers sometimes come back incomplete)
(screenshot)

Then click Start and prepare to witness a miracle

Witnessing the Magic

The middle panel starts thinking
and then just keeps running
while I fold my arms
and supervise……
(nothing left for me to do)

Automated Task Decomposition

Honestly, this task description is far more complete than my instruction, and it fully captures what I meant
This step is the core of AutoGPT

Completing the Tasks

Completing “1. Develop a web scraper using Python to extract daily palladium prices from the website https://www.lppm.com/data from 2013 to present.”

Sure, I can help you with that task. To extract daily palladium prices from the website https://www.lppm.com/data from 2013 to present, we can use Python’s Beautiful Soup library which is a popular web scraping library.

Here’s the code that will fetch the daily palladium prices from the website and store it in a CSV file:

import requests
from bs4 import BeautifulSoup
import csv

# URL of the website to be scraped
url = "https://www.lppm.com/data"

# Send a GET request to the website and get the response
response = requests.get(url)

# Parse the HTML content using Beautiful Soup
soup = BeautifulSoup(response.content, 'html.parser')

# Get the table containing the palladium prices
table = soup.find('table', attrs={'id': 'datatable'})

# Get the table rows
rows = table.find_all('tr')

# Create a list to hold the data
data = []

# Loop through each row and get the data
for row in rows:
    cols = row.find_all('td')
    cols = [col.text.strip() for col in cols]
    data.append(cols)

# Write the data to a CSV file
with open('palladium_prices.csv',
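The quoted output stops mid-statement at the CSV write. A minimal sketch of how that last line would typically be finished, continuing from the data list and csv import above (this completion is mine, not the tool's):

with open('palladium_prices.csv', 'w', newline='') as file:
    writer = csv.writer(file)   # CSV writer over the open file handle
    writer.writerows(data)      # one CSV row per scraped table row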

Completing “2. Clean and format the extracted data to ensure consistency and accuracy.”

Sure, I can help you with that.

After fetching the data from the website, we can use Python’s pandas library to clean and format the data. Here are the steps I suggest:

First, we need to import the pandas library and read the csv file that we created while fetching the data. We can use the read_csv() method for this.

import pandas as pd

df = pd.read_csv('palladium_data.csv')

Next, we need to check the data types of each column and ensure that they are consistent. We can use the dtypes attribute to check the data types.

print(df.dtypes)

This will give us the data types of each column. We can then convert any columns that are not in the correct data type to the correct one.

We also need to check for any missing or null values in the data. We can use the isnull() method to check for any null values.

print(df.isnull().sum())
If there are any null values, we can either drop the rows or fill them with appropriate values.

We can also format the date column to ensure consistency. We can use the to_datetime()
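The sentence was cut off there; a minimal sketch of the cleaning steps it was describing, assuming the Date and Price column names used elsewhere in the output (my completion, not the tool's):

import pandas as pd

df = pd.read_csv('palladium_data.csv')

# Coerce the date column to a uniform datetime type
df['Date'] = pd.to_datetime(df['Date'])

# Coerce prices to numeric; unparseable entries become NaN
df['Price'] = pd.to_numeric(df['Price'], errors='coerce')

# Drop the rows that still have missing values
df = df.dropna()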

Completing “3. Write a function to export the cleaned data to a CSV file for easy analysis and storage.”

Sure, I can help you with that. Here’s a Python function that exports the cleaned data to a CSV file:

import pandas as pd

def export_to_csv(data):
    df = pd.DataFrame(data, columns=['Date', 'Price'])
    df.to_csv('palladium_prices.csv', index=False)

This function takes in the cleaned data as a list of tuples, where each tuple contains the date and price of the palladium. It then converts this data into a Pandas DataFrame and exports it to a CSV file named “palladium_prices.csv” in the current working directory.

You can call this function at the end of your data cleaning process to export the cleaned data to a CSV file for easy analysis and storage.
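For context, a hypothetical call to the quoted function; the sample rows here are made up for illustration:

sample = [('2013-01-02', 700.0), ('2013-01-03', 705.5)]  # (date, price) tuples
export_to_csv(sample)  # writes palladium_prices.csv to the working directory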

So thoughtful: every step comes with an explanation and a solution, as if it's afraid you won't follow, haha

It Starts Improvising

In theory the task was complete at this point
But it didn't stop
It started thinking again
and decided to improvise some extra scenes
First it cleaned the data for me, handling missing values
Then it grouped the daily data by year (you'd need all of this for any further analysis anyway; real foresight)

This diligent contractor is seriously impressive
(screenshot)
Then came another round of output for the extra tasks it had assigned itself
Completing “Create a function to handle missing or null values in the data by either dropping the rows or filling them with appropriate values.”

Sure, I can create a Python function to handle missing or null values in the palladium daily prices data fetched from the website https://www.lppm.com/data. Here is the code:

import pandas as pd

def handle_missing_values(data):
    # Drop rows with missing or null values
    data.dropna(inplace=True)
    # Fill missing or null values with appropriate values
    data.fillna(method='ffill', inplace=True)
    # Save the data to a CSV file
    data.to_csv('palladium_daily_prices.csv', index=False)
    return data

Not content with just the code, and worried you still might not follow, it keeps explaining:
The handle_missing_values function takes in the data variable, which should be a Pandas DataFrame containing the palladium daily prices data fetched from the website. The function first drops any rows with missing or null values using the dropna method. It then fills any remaining missing or null values with the previous valid value using the fillna method with the ffill parameter. Finally, the function saves the cleaned data to a CSV file named palladium_daily_prices.csv using the to_csv method with the index=False parameter.

Please note that this function assumes that the data variable contains… (sadly this part ran too long and the output was cut off)
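To round out the clipped explanation, a hypothetical call to the quoted function, assuming the CSV from step 1 is on disk (my addition, not the tool's):

import pandas as pd

df = pd.read_csv('palladium_prices.csv')   # the data fetched in step 1
cleaned = handle_missing_values(df)        # drops/fills NaNs, writes palladium_daily_prices.csv
print(cleaned.head())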

Next it split the data by year with the groupby() function
Completing “Use the groupby() method to group the data by year and calculate the average price of palladium for each year.”

Sure, I can help you with that. Here’s a Python code that can fetch palladium daily prices from the website https://www.lppm.com/data, filter the data from 2013 to now, group the data by year, and calculate the average price of palladium for each year:

import pandas as pd
import requests
from bs4 import BeautifulSoup

# Fetch the data from the website
url = "https://www.lppm.com/data"
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')

# Find the table with the palladium daily prices
table = soup.find('table', {'id': 'tblPalladium'})

# Convert the table to a pandas dataframe
df = pd.read_html(str(table))[0]

# Filter the data from 2013 to now
df['Date'] = pd.to_datetime(df['Date'])
df = df[df['Date'].dt.year >= 2013]

# Group the data by year and calculate the average price of palladium for each year
df_grouped = df.groupby(df['Date'].dt.year)['Price'].mean()

# Save the data to a csv file
df_grouped.to_csv('palladium_prices.csv')

Then, apparently deciding my task was too plain without any visualization,
it threw in a chart
Completing “Create a visualization of the palladium prices over time using a line chart in Python’s Matplotlib library.”

Sure, I can help you with that. Here is the Python code to fetch the palladium daily prices from the website https://www.lppm.com/data from 2013-now and create a CSV file to record the data:

import requests
from bs4 import BeautifulSoup
import csv

url = "https://www.lppm.com/data"
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
table = soup.find_all('table')[0]
rows = table.find_all('tr')
data = []
for row in rows:
    cols = row.find_all('td')
    cols = [col.text.strip() for col in cols]
    data.append(cols)

with open('palladium_prices.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerows(data)

This code will fetch the data from the website and write it to a CSV file named palladium_prices.csv.

To create a line chart of the palladium prices over time, we can use Python’s Matplotlib library. Here is the code to do that:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('palladium_prices.csv')
df['Date
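The chart code was clipped mid-line; a minimal sketch of how it would typically continue from the df loaded above, assuming Date and Price columns in the CSV (my completion, not the tool's output):

df['Date'] = pd.to_datetime(df['Date'])   # parse dates for a proper time axis

plt.plot(df['Date'], df['Price'])         # line chart of price over time
plt.title('Palladium Prices Over Time')
plt.xlabel('Date')
plt.ylabel('Price')
plt.show()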

Just as I was wondering why it still hadn't finished,
it started improvising again
(screenshot)
It actually began handling outliers!!!!
Completing “Create a function to handle outliers in the data by either dropping them or replacing them with appropriate values.”

Sure, I can help you with that. Here is a Python code that fetches palladium daily prices from the website https://www.lppm.com/data from 2013-now and creates a CSV file to record the data.

(totally unnecessary, really)

import pandas as pd
import requests
from bs4 import BeautifulSoup

def fetch_palladium_prices():
    url = 'https://www.lppm.com/data'
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'}
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'html.parser')
    table = soup.find_all('table')[0]
    df = pd.read_html(str(table))[0]
    df.columns = df.columns.droplevel()
    df = df.rename(columns={'Date': 'date', 'Palladium AM': 'palladium_am', 'Palladium PM': 'palladium_pm'})
    df['date'] = pd.to_datetime(df['date'], format='%d-%b-%y')
    df.to_csv('

Then it called the describe() function to produce summary statistics for the data
Completing “Use the describe() method to get a summary of the statistical measures of the data.”

Sure, here’s a Python code snippet that fetches daily palladium prices from the LPPM website and creates a CSV file to record the data. This code uses the pandas library to read the HTML table from the website and convert it into a DataFrame, which can then be used to compute the summary statistics using the describe() method.

import pandas as pd

# Define the URL and read the HTML table
url = 'https://www.lppm.com/data'
dfs = pd.read_html(url)

# Select the table containing the palladium prices
df = dfs[3]

# Rename the columns and set the date as the index
df.columns = ['Date', 'USD', 'EUR', 'GBP', 'JPY']
df['Date'] = pd.to_datetime(df['Date'], format='%d-%b-%y')
df.set_index('Date', inplace=True)

# Save the data to a CSV file
df.to_csv('palladium_prices.csv')

# Compute the summary statistics using the describe() method
summary = df.describe()
print(summary)

This code should fetch the palladium prices from the LPPM website, create a CSV file called palladium_prices.csv to record the data

At the very end, just as I was deciding to hit Stop if it didn't wrap up
(its one-man show had exhausted my patience; the audience couldn't sit still)
it started plotting the distribution
Completing “Create a histogram of the palladium prices to visualize the distribution of the data.”

Sure, I can help you with that. Here’s the code that can fetch the palladium daily prices from the website and create a CSV file to record the data:

import requests
from bs4 import BeautifulSoup
import csv

url = 'https://www.lppm.com/data'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
table = soup.find('table', {'class': 'table-data'})
rows = table.find_all('tr')
data = []
for row in rows:
    cols = row.find_all('td')
    cols = [col.text.strip() for col in cols]
    data.append(cols)

with open('palladium_prices.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerows(data)

To create a histogram of the palladium prices, you can add the following code to the end of the above code:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('palladium_prices.csv')
prices = df['Price']

plt.hist(prices, bins=20)
plt.title('Histogram of Palladium Prices')
plt.xlabel('Price')
plt.ylabel('Frequency')
plt.show()

(screenshot)
Even after all that work it apparently had energy to spare, and kept churning out output like it had been given a shot of adrenaline
It started computing correlations between the target price and other economic indicators, drawing scatter plots……
After this whole routine you'd basically have a midterm project
(I can't help wondering how many finance term papers it trained on, haha)
……
I'll never use any of it, but since it wrote it anyway,
the code is pasted below

Handling outliers:
Completing “Create a Python function to handle outliers in the palladium daily prices data by either dropping them or replacing them with appropriate values.”

Sure, I can help you with that. Here’s a Python function that fetches the palladium daily prices from the website you mentioned and handles the outliers by dropping them:

import pandas as pd
import requests
from bs4 import BeautifulSoup

def fetch_palladium_data():
    url = 'https://www.lppm.com/data'
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    table = soup.find('table')
    df = pd.read_html(str(table))[0]
    df.columns = df.columns.droplevel()
    df = df.rename(columns={'Unnamed: 0': 'Date'})
    df['Date'] = pd.to_datetime(df['Date'])
    df = df.set_index('Date')
    # Handling outliers
    df = df.drop(df[df['Palladium'] < 500].index)
    # Writing data to a CSV file
    df.to_csv('palladium_daily_prices.csv')

This function fetches the data from the website, cleans it, drops outliers with a value less than 500, and writes the clean data to a CSV file named palladium_daily_prices.csv. You can call this function as follows:

fetch_palladium
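The call itself got clipped; given the function defined above, presumably it would read:

fetch_palladium_data()  # scrape, drop rows below 500, write palladium_daily_prices.csv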

Correlation analysis
Completing “Use the corr() method in pandas to calculate the correlation between the palladium prices and other economic factors, such as inflation or stock prices.”

Sure, I can help you with that. Here’s a Python code that fetches daily palladium prices from the website https://www.lppm.com/data and calculates the correlation between palladium prices and inflation:

import pandas as pd
import requests
from bs4 import BeautifulSoup

# Fetch data from website
url = 'https://www.lppm.com/data'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
table = soup.find('table')
data = []

# Extract palladium prices
for row in table.find_all('tr'):
    cols = row.find_all('td')
    if len(cols) == 4:
        date = cols[0].text.strip()
        price = float(cols[1].text.strip().replace(',', ''))
        data.append({'Date': date, 'Palladium Price': price})

# Convert data to pandas dataframe
df = pd.DataFrame(data)

# Fetch inflation data
inflation_url = 'https://www.statbureau.org/en/united-states/inflation-tables'
inflation_data = pd.read_html(inflation_url)[0]

# Remove unnecessary columns and rename columns
inflation_data = inflation_data[['Year', 'Annual Average']]
inflation_data.columns =
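The output broke off before the promised corr() call; a minimal sketch of how that step would typically be finished, continuing from the df and inflation_data frames above (the renamed columns are my assumption):

inflation_data.columns = ['Year', 'Inflation']

# Average the palladium price per year so both series share a Year key
df['Year'] = pd.to_datetime(df['Date']).dt.year
yearly = df.groupby('Year')['Palladium Price'].mean().reset_index()

# Join the two series on Year and compute their correlation
merged = yearly.merge(inflation_data, on='Year')
print(merged['Palladium Price'].corr(merged['Inflation']))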

Plotting the correlation analysis
Completing “Create a Python function to plot the palladium prices against other economic factors using a scatter plot in Matplotlib.”

Sure, here’s a Python function that fetches the daily palladium prices from the LPPM website and plots it against other economic factors using a scatter plot in Matplotlib:

import pandas as pd
import matplotlib.pyplot as plt
import requests
from bs4 import BeautifulSoup

def plot_palladium_prices():
    # Fetch the data from the LPPM website
    url = 'https://www.lppm.com/data'
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    table = soup.find('table', {'class': 'table table-striped table-bordered table-hover dataTable no-footer'})
    # Convert the data to a pandas dataframe
    data = []
    for tr in table.find_all('tr'):
        row = []
        for td in tr.find_all('td'):
            row.append(td.text.strip())
        data.append(row)
    df = pd.DataFrame(data[1:], columns=data[0])
    df = df.iloc[::-1].reset_index(drop=True)
    df['Date'] = pd.to_datetime(df['Date'], format='%d-%b-%y')
    df['Palladium'] = pd.to_numeric(df['Palladium'].str.replace(',', ''))
    # Plot the data
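The function broke off right at the plotting comment; a minimal sketch of the scatter plot it was building toward, continuing inside the function body and assuming a hypothetical 'Inflation' column has been merged into df beforehand (my completion, not the tool's):

    # 'Inflation' is a hypothetical column, assumed merged into df beforehand
    plt.scatter(df['Inflation'], df['Palladium'])
    plt.xlabel('Inflation')
    plt.ylabel('Palladium price')
    plt.title('Palladium Price vs. Inflation')
    plt.show()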

Finally!!!! It thought for a moment
and the run was finally over
(screenshot)
It suddenly felt like when the boss wants to end a meeting while an employee keeps rambling on: the moment they stop, everyone breathes a sigh of relief

I have to say, GPT tamed by AutoGPT, even at version 3.5, is remarkably dedicated

Outrageously dedicated

When will it be able to move all of this output straight onto CSDN for me?

To wrap up, let's look back at its elaborate task chain
(screenshot)
Do you still remember what my original instruction was?

I've forgotten it myself……

One last time, here is that magical site:
https://app.cognosys.ai/

This post covered natural language in, code out
The next post will try code in, natural language plus code out

We'll see how it optimizes code (the code itself runs, but the loops are so inefficient they froze my machine)
