Implementing MinIO Multipart Upload, Resumable Upload, Instant Upload, Chunked Download, and Pausable Download with AWS S3

Table of Contents

  • Preface
  • 1. Features
    • Upload features
    • Download features
    • Demo
  • 2. Design and Flow
    • Upload flow
    • Download flow
  • 3. Code Examples
  • 4. Open Question

Preface

Amazon Simple Storage Service (S3) is a public cloud storage service. Web application developers can use it to store digital assets such as images, videos, music, and documents. S3 exposes a RESTful API for interacting with the service programmatically, and most mainstream storage vendors on the market now support the S3 protocol.

This article is adapted from the post and code by 风希落: https://www.cnblogs.com/jsonq/p/18186340

The project uses a front-end/back-end separated architecture:
Frontend: Vue 3 + Element Plus + axios + spark-md5
Backend: Spring Boot 3.x + MinIO + aws-s3 + Redis + MySQL + MyBatis-Plus

All code for this article is available on Gitee: https://gitee.com/luzhiyong_erfou/learning-notes/tree/master/aws-s3-upload

1. Features

Upload features

  • Multipart upload of large files
  • Instant upload (via MD5)
  • Resumable upload
  • Upload progress

Download features

  • Chunked download
  • Pausable download
  • Download progress

Demo

(screenshots omitted)

2. Design and Flow

Upload flow

Uploading one file involves three requests to the backend:

  • On clicking upload, call the <check file md5> endpoint to determine the file's status (fully uploaded, not uploaded, or partially uploaded)
  • Based on that status, call <initialize multipart upload> to obtain the presigned URL for each part
  • The frontend pairs each part with its URL and uploads the parts directly to object storage
  • Once every part is uploaded, call the <merge parts> endpoint to merge the file and persist its metadata
    (flow diagram omitted)

Overall steps:

  • The frontend computes the file's MD5 and requests the file's status
  • If the file is already uploaded, the backend returns success immediately, together with the file's URL
  • If the file has not been uploaded, the frontend calls the multipart-initialization endpoint, gets the upload URLs back, and pairs each part with its URL in a loop
  • If the file is partially uploaded, the backend returns its uploadId (the file's identifier in MinIO) and listParts (the indexes of the parts already uploaded); the frontend calls the initialization endpoint again, the backend regenerates the upload URLs, and the frontend filters out the already-uploaded parts so that only the remaining parts are paired with URLs
  • The frontend uploads each part to its presigned URL
  • When all parts are uploaded, the frontend calls the merge endpoint
  • The backend checks whether the file is single-part or multipart: a single part skips the merge and only persists the metadata; a multipart file is merged first, then persisted. The file info is deleted from Redis and the file URL is returned.
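The chunk bookkeeping in the steps above can be sketched as a small helper. The 5 MB chunk size and the class and method names here are assumptions for illustration only; in the project the chunk size is configured on the frontend:

```java
// Sketch of the upload-side chunk math; CHUNK_SIZE is an assumed value.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class ChunkPlan {
    static final long CHUNK_SIZE = 5L * 1024 * 1024; // 5 MB per part (assumption)

    // Number of parts a file of the given size is split into (at least 1)
    static int chunkCount(long fileSize) {
        return (int) Math.max(1, (fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE);
    }

    // 1-based part numbers still missing, given the listParts returned by the backend
    static List<Integer> remainingParts(int chunkCount, Set<Integer> listParts) {
        List<Integer> remaining = new ArrayList<>();
        for (int i = 1; i <= chunkCount; i++) {
            if (!listParts.contains(i)) remaining.add(i);
        }
        return remaining;
    }
}
```

With chunkCount == 1 the flow takes the single-file branch; otherwise the remaining parts are paired with the regenerated presigned URLs.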

Download flow

Overall steps:

  • The frontend computes the number of chunk requests and sets each request's byte offset
  • It calls the backend endpoint in a loop
  • The backend checks whether the file info is cached, loads it, and returns the file stream for the offset and chunk size passed in by the frontend
  • The frontend records each chunk's blob
  • The blobs assembled from the stream are used to save the file

(flow diagram omitted)
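The per-request offsets in the download steps can be expressed the same way. This is a sketch assuming the frontend requests fixed-size chunks via the Range header; the names are illustrative, not from the project:

```java
// Sketch of the download-side range math; chunk size is chosen by the caller.
class RangePlan {
    // Total number of chunk requests needed for the file
    static int requestCount(long fileSize, long chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }

    // Range header value for the i-th request (0-based), clamped to the file end
    static String rangeHeader(long fileSize, long chunkSize, int index) {
        long start = index * chunkSize;
        long end = Math.min(start + chunkSize - 1, fileSize - 1);
        return "bytes=" + start + "-" + end;
    }
}
```

Each response is kept as a blob; once all requestCount responses have arrived, the blobs are concatenated in order and saved.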

3. Code Examples

Service

import cn.hutool.core.bean.BeanUtil;
import cn.hutool.core.date.DateUtil;
import cn.hutool.core.io.FileUtil;
import cn.hutool.core.util.StrUtil;
import cn.hutool.json.JSONUtil;
import cn.superlu.s3uploadservice.common.R;
import cn.superlu.s3uploadservice.config.FileProperties;
import cn.superlu.s3uploadservice.constant.FileHttpCodeEnum;
import cn.superlu.s3uploadservice.mapper.SysFileUploadMapper;
import cn.superlu.s3uploadservice.model.bo.FileUploadInfo;
import cn.superlu.s3uploadservice.model.entity.SysFileUpload;
import cn.superlu.s3uploadservice.model.vo.BaseFileVo;
import cn.superlu.s3uploadservice.model.vo.UploadUrlsVO;
import cn.superlu.s3uploadservice.service.SysFileUploadService;
import cn.superlu.s3uploadservice.utils.AmazonS3Util;
import cn.superlu.s3uploadservice.utils.MinioUtil;
import cn.superlu.s3uploadservice.utils.RedisUtil;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;
import java.util.List;
import java.util.concurrent.TimeUnit;

@Service
@Slf4j
@RequiredArgsConstructor
public class SysFileUploadServiceImpl extends ServiceImpl<SysFileUploadMapper, SysFileUpload> implements SysFileUploadService {

    private static final Integer BUFFER_SIZE = 1024 * 64; // 64KB

    private final RedisUtil redisUtil;
    private final MinioUtil minioUtil;
    private final AmazonS3Util amazonS3Util;
    private final FileProperties fileProperties;

    /**
     * Check whether the file already exists
     * @param md5
     * @return
     */
    @Override
    public R<BaseFileVo<FileUploadInfo>> checkFileByMd5(String md5) {
        log.info("Checking whether md5 <{}> exists in Redis", md5);
        FileUploadInfo fileUploadInfo = (FileUploadInfo) redisUtil.get(md5);
        if (fileUploadInfo != null) {
            log.info("Found md5 in Redis: {}", JSONUtil.toJsonStr(fileUploadInfo));
            if (fileUploadInfo.getChunkCount() == 1) {
                return R.ok(BaseFileVo.builder(FileHttpCodeEnum.NOT_UPLOADED, null));
            } else {
                List<Integer> listParts = minioUtil.getListParts(fileUploadInfo.getObject(), fileUploadInfo.getUploadId());
//              List<Integer> listParts = amazonS3Util.getListParts(fileUploadInfo.getObject(), fileUploadInfo.getUploadId());
                fileUploadInfo.setListParts(listParts);
                return R.ok(BaseFileVo.builder(FileHttpCodeEnum.UPLOADING, fileUploadInfo));
            }
        }
        log.info("md5 <{}> not in Redis, checking MySQL", md5);
        SysFileUpload file = baseMapper.selectOne(new LambdaQueryWrapper<SysFileUpload>().eq(SysFileUpload::getMd5, md5));
        if (file != null) {
            log.info("md5 <{}> found in MySQL; the file is already in MinIO, instant upload applies", md5);
            FileUploadInfo dbFileInfo = BeanUtil.toBean(file, FileUploadInfo.class);
            return R.ok(BaseFileVo.builder(FileHttpCodeEnum.UPLOAD_SUCCESS, dbFileInfo));
        }
        return R.ok(BaseFileVo.builder(FileHttpCodeEnum.NOT_UPLOADED, null));
    }

    /**
     * Initialize the presigned part-upload URLs and related data
     * @param fileUploadInfo
     * @return
     */
    @Override
    public R<BaseFileVo<UploadUrlsVO>> initMultipartUpload(FileUploadInfo fileUploadInfo) {
        log.info("Checking whether md5 <{}> exists in Redis", fileUploadInfo.getMd5());
        FileUploadInfo redisFileUploadInfo = (FileUploadInfo) redisUtil.get(fileUploadInfo.getMd5());
        // If Redis already holds a record for this md5, prefer the Redis copy
        String object;
        if (redisFileUploadInfo != null) {
            fileUploadInfo = redisFileUploadInfo;
            object = redisFileUploadInfo.getObject();
        } else {
            String originFileName = fileUploadInfo.getOriginFileName();
            String suffix = FileUtil.extName(originFileName);
            String fileName = FileUtil.mainName(originFileName);
            // Rename the file and store it under a yyyy/MM/dd folder structure
            String nestFile = DateUtil.format(LocalDateTime.now(), "yyyy/MM/dd");
            object = nestFile + "/" + fileName + "_" + fileUploadInfo.getMd5() + "." + suffix;
            fileUploadInfo.setObject(object).setType(suffix);
        }
        UploadUrlsVO urlsVO;
        if (fileUploadInfo.getChunkCount() == 1) {
            // Single-file upload
            log.info("Chunk count <{}>: single-file upload", fileUploadInfo.getChunkCount());
//            urlsVO = minioUtil.getUploadObjectUrl(fileUploadInfo.getContentType(), object);
            urlsVO = amazonS3Util.getUploadObjectUrl(fileUploadInfo.getContentType(), object);
        } else {
            // Multipart upload
            log.info("Chunk count <{}>: multipart upload", fileUploadInfo.getChunkCount());
//            urlsVO = minioUtil.initMultiPartUpload(fileUploadInfo, object);
            urlsVO = amazonS3Util.initMultiPartUpload(fileUploadInfo, object);
        }
        fileUploadInfo.setUploadId(urlsVO.getUploadId());
        // Store in Redis (the only reason to store single-part files too is so they can also be persisted on merge; a single part is one request, so problems are rare)
        redisUtil.set(fileUploadInfo.getMd5(), fileUploadInfo, fileProperties.getOss().getBreakpointTime(), TimeUnit.DAYS);
        return R.ok(BaseFileVo.builder(FileHttpCodeEnum.SUCCESS, urlsVO));
    }

    /**
     * Merge the uploaded parts
     * @param md5
     * @return
     */
    @Override
    public R<BaseFileVo<String>> mergeMultipartUpload(String md5) {
        FileUploadInfo redisFileUploadInfo = (FileUploadInfo) redisUtil.get(md5);
        String url = StrUtil.format("{}/{}/{}", fileProperties.getOss().getEndpoint(), fileProperties.getBucketName(), redisFileUploadInfo.getObject());
        SysFileUpload files = BeanUtil.toBean(redisFileUploadInfo, SysFileUpload.class);
        files.setUrl(url).setBucket(fileProperties.getBucketName()).setCreateTime(LocalDateTime.now());
        Integer chunkCount = redisFileUploadInfo.getChunkCount();
        // A single part needs no merge; otherwise merge and check the returned boolean
        boolean isSuccess = chunkCount == 1 || minioUtil.mergeMultipartUpload(redisFileUploadInfo.getObject(), redisFileUploadInfo.getUploadId());
//        boolean isSuccess = chunkCount == 1 || amazonS3Util.mergeMultipartUpload(redisFileUploadInfo.getObject(), redisFileUploadInfo.getUploadId());
        if (isSuccess) {
            baseMapper.insert(files);
            redisUtil.del(md5);
            return R.ok(BaseFileVo.builder(FileHttpCodeEnum.SUCCESS, url));
        }
        return R.ok(BaseFileVo.builder(FileHttpCodeEnum.UPLOAD_FILE_FAILED, null));
    }

    /**
     * Chunked download
     * @param id
     * @param request
     * @param response
     * @return
     * @throws IOException
     */
    @Override
    public ResponseEntity<byte[]> downloadMultipartFile(Long id, HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Cache the file record in Redis to avoid hitting the database on every chunk request
        SysFileUpload file = null;
        SysFileUpload redisFile = (SysFileUpload) redisUtil.get(String.valueOf(id));
        if (redisFile == null) {
            SysFileUpload dbFile = baseMapper.selectById(id);
            if (dbFile == null) {
                return null;
            } else {
                file = dbFile;
                redisUtil.set(String.valueOf(id), file, 1, TimeUnit.DAYS);
            }
        } else {
            file = redisFile;
        }
        String range = request.getHeader("Range");
        String fileName = file.getOriginFileName();
        log.info("Downloading object <{}>", file.getObject());
        // Fetch the object's metadata from the bucket; throws if the object does not exist
//        StatObjectResponse objectResponse = minioUtil.statObject(file.getObject());
        S3Object s3Object = amazonS3Util.statObject(file.getObject());
        long startByte = 0; // download start position
//        long fileSize = objectResponse.size();
        long fileSize = s3Object.getObjectMetadata().getContentLength();
        long endByte = fileSize - 1; // download end position
        log.info("Total file length: {}, current range: {}", fileSize, range);
        BufferedOutputStream os = null; // buffered output stream
//        GetObjectResponse stream = null; // minio object stream
        // If a Range header is present, serve only the requested byte range, e.g. Range: bytes=0-52428800
        if (range != null && range.contains("bytes=") && range.contains("-")) {
            range = range.substring(range.lastIndexOf("=") + 1).trim(); // 0-52428800
            String[] ranges = range.split("-");
            // Determine which form the range takes
            if (range.startsWith("-")) {
                // Form 1: bytes=-2343, treated as 0-2343 (split() yields a leading empty element here)
                endByte = Long.parseLong(ranges[1]);
            } else if (ranges.length == 1) {
                // Form 2: bytes=2343- -> from 2343 to the end
                startByte = Long.parseLong(ranges[0]);
            } else if (ranges.length == 2) {
                // Form 3: bytes=22-2343
                startByte = Long.parseLong(ranges[0]);
                endByte = Long.parseLong(ranges[1]);
            }
        }
        // Length to download; make sure contentLength never exceeds the bytes actually remaining in the file
        long contentLength = Math.min(endByte - startByte + 1, fileSize - startByte);
        // Content type
        String contentType = request.getServletContext().getMimeType(fileName);
        // Avoid mojibake in the downloaded file name
        byte[] fileNameBytes = fileName.getBytes(StandardCharsets.UTF_8);
        fileName = new String(fileNameBytes, 0, fileNameBytes.length, StandardCharsets.ISO_8859_1);
        // Response headers -----------------------------------------------------------------------
        // Advertise byte-range support for resumable downloads
        response.setHeader("Accept-Ranges", "bytes");
        // HTTP 206 Partial Content; if a browser does not support it, fall back to SC_OK
        response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
        response.setContentType(contentType);
//        response.setHeader("Last-Modified", objectResponse.lastModified().toString());
        response.setHeader("Last-Modified", s3Object.getObjectMetadata().getLastModified().toString());
        response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
        response.setHeader("Content-Length", String.valueOf(contentLength));
        // Content-Range format: [start]-[end]/[total size]
        response.setHeader("Content-Range", "bytes " + startByte + "-" + endByte + "/" + fileSize);
//        response.setHeader("ETag", "\"".concat(objectResponse.etag()).concat("\""));
        response.setHeader("ETag", "\"".concat(s3Object.getObjectMetadata().getETag()).concat("\""));
        response.setContentType("application/octet-stream;charset=UTF-8");
        S3ObjectInputStream objectInputStream = null;
        try {
            // Open the object stream for the requested range
            String object = s3Object.getKey();
            S3Object currentObject = amazonS3Util.getObject(object, startByte, contentLength);
            objectInputStream = currentObject.getObjectContent();
//            stream = minioUtil.getObject(objectResponse.object(), startByte, contentLength);
            os = new BufferedOutputStream(response.getOutputStream());
            // Copy the object stream into the response OutputStream
            byte[] bytes = new byte[BUFFER_SIZE];
            long bytesWritten = 0;
            int bytesRead = -1;
            while ((bytesRead = objectInputStream.read(bytes)) != -1) {
//            while ((bytesRead = stream.read(bytes)) != -1) {
                if (bytesWritten + bytesRead >= contentLength) {
                    os.write(bytes, 0, (int) (contentLength - bytesWritten));
                    break;
                } else {
                    os.write(bytes, 0, bytesRead);
                    bytesWritten += bytesRead;
                }
            }
            os.flush();
            response.flushBuffer();
            // Return the matching HTTP status
            return new ResponseEntity<>(bytes, HttpStatus.OK);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (os != null) os.close();
//            if (stream != null) stream.close();
            if (objectInputStream != null) objectInputStream.close();
        }
        return null;
    }

    @Override
    public R<List<SysFileUpload>> getFileList() {
        List<SysFileUpload> filesList = this.list();
        return R.ok(filesList);
    }
}
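The Range handling embedded in downloadMultipartFile can be pulled out into a standalone helper for testing. This is a sketch rather than the project's code verbatim; note that String.split produces a leading empty element for a bytes=-N suffix range, so that form has to be checked before indexing into the array:

```java
// Sketch of Range-header parsing: returns {startByte, endByte} for the three
// forms handled by the download endpoint ("bytes=-N", "bytes=N-", "bytes=A-B").
class RangeParser {
    static long[] parse(String range, long fileSize) {
        long startByte = 0;
        long endByte = fileSize - 1;
        if (range != null && range.contains("bytes=") && range.contains("-")) {
            String spec = range.substring(range.lastIndexOf("=") + 1).trim();
            String[] parts = spec.split("-");
            if (spec.startsWith("-")) {
                // bytes=-N, treated here as 0-N (RFC 7233 actually means the last N bytes)
                endByte = Long.parseLong(parts[1]);
            } else if (parts.length == 1) {
                // bytes=N- : from N to the end of the file
                startByte = Long.parseLong(parts[0]);
            } else {
                // bytes=A-B
                startByte = Long.parseLong(parts[0]);
                endByte = Long.parseLong(parts[1]);
            }
        }
        return new long[]{startByte, endByte};
    }
}
```

With no Range header the helper falls back to the whole file, matching the endpoint's defaults.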

AmazonS3Util


import cn.hutool.core.util.IdUtil;
import cn.superlu.s3uploadservice.config.FileProperties;
import cn.superlu.s3uploadservice.constant.FileHttpCodeEnum;
import cn.superlu.s3uploadservice.model.bo.FileUploadInfo;
import cn.superlu.s3uploadservice.model.vo.UploadUrlsVO;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import com.google.common.collect.HashMultimap;
import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import java.net.URL;
import java.util.*;
import java.util.stream.Collectors;

@Slf4j
@Component
public class AmazonS3Util {

    @Resource
    private FileProperties fileProperties;

    private AmazonS3 amazonS3;

    // Spring auto-injection fails here, so build the client manually
    @PostConstruct
    public void init() {
        ClientConfiguration clientConfiguration = new ClientConfiguration();
        clientConfiguration.setMaxConnections(100);
        AwsClientBuilder.EndpointConfiguration endpointConfiguration = new AwsClientBuilder.EndpointConfiguration(
                fileProperties.getOss().getEndpoint(), fileProperties.getOss().getRegion());
        AWSCredentials awsCredentials = new BasicAWSCredentials(
                fileProperties.getOss().getAccessKey(), fileProperties.getOss().getSecretKey());
        AWSCredentialsProvider awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials);
        this.amazonS3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(endpointConfiguration)
                .withClientConfiguration(clientConfiguration)
                .withCredentials(awsCredentialsProvider)
                .disableChunkedEncoding()
                .withPathStyleAccessEnabled(true)
                .build();
    }

    /**
     * List the parts already uploaded to the bucket
     * @param object   object name
     * @param uploadId multipart upload id (generated by the storage service)
     * @return List<Integer>
     */
    @SneakyThrows
    public List<Integer> getListParts(String object, String uploadId) {
        ListPartsRequest listPartsRequest = new ListPartsRequest(fileProperties.getBucketName(), object, uploadId);
        PartListing listParts = amazonS3.listParts(listPartsRequest);
        return listParts.getParts().stream()
                .map(PartSummary::getPartNumber)
                .collect(Collectors.toList());
    }

    /**
     * Presigned single-file upload
     * @param object object name (uuid based)
     * @return UploadUrlsVO
     */
    public UploadUrlsVO getUploadObjectUrl(String contentType, String object) {
        try {
            log.info("<{}> starting single-file upload", object);
            UploadUrlsVO urlsVO = new UploadUrlsVO();
            List<String> urlList = new ArrayList<>();
            // Mainly for images: to view them directly in the browser instead of downloading, the matching content-type must be set
            HashMultimap<String, String> headers = HashMultimap.create();
            if (contentType == null || contentType.equals("")) {
                contentType = "application/octet-stream";
            }
            headers.put("Content-Type", contentType);
            String uploadId = IdUtil.simpleUUID();
            // Generate the presigned URL
            GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(
                    fileProperties.getBucketName(), object, HttpMethod.PUT);
            generatePresignedUrlRequest.addRequestParameter("uploadId", uploadId);
            URL url = amazonS3.generatePresignedUrl(generatePresignedUrlRequest);
            urlList.add(url.toString());
            urlsVO.setUploadId(uploadId).setUrls(urlList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Single-file upload failed: {}", e.getMessage());
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Initialize a multipart upload
     * @param fileUploadInfo file info passed in by the frontend
     * @param object object
     * @return UploadUrlsVO
     */
    public UploadUrlsVO initMultiPartUpload(FileUploadInfo fileUploadInfo, String object) {
        Integer chunkCount = fileUploadInfo.getChunkCount();
        String contentType = fileUploadInfo.getContentType();
        String uploadId = fileUploadInfo.getUploadId();
        log.info("File <{}> - parts <{}>: initializing multipart upload, content type {}", object, chunkCount, contentType);
        UploadUrlsVO urlsVO = new UploadUrlsVO();
        try {
            // If an uploadId was passed in, this is a resumed upload; do not generate a new uploadId
            if (uploadId == null || uploadId.equals("")) {
                // Step 1: initiate the multipart upload and set the content type
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentType(contentType);
                InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(
                        fileProperties.getBucketName(), object, metadata);
                uploadId = amazonS3.initiateMultipartUpload(initRequest).getUploadId();
                log.info("No uploadId given, generated a new one: {}", uploadId);
            }
            urlsVO.setUploadId(uploadId);
            List<String> partList = new ArrayList<>();
            for (int i = 1; i <= chunkCount; i++) {
                // Generate a presigned URL per part, expiring in 1 hour
                Date expiration = new Date(System.currentTimeMillis() + 3600 * 1000);
                GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(
                        fileProperties.getBucketName(), object, HttpMethod.PUT).withExpiration(expiration);
                generatePresignedUrlRequest.addRequestParameter("uploadId", uploadId);
                generatePresignedUrlRequest.addRequestParameter("partNumber", String.valueOf(i));
                URL url = amazonS3.generatePresignedUrl(generatePresignedUrlRequest);
                partList.add(url.toString());
            }
            log.info("Multipart upload initialized");
            urlsVO.setUrls(partList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Failed to initialize multipart upload: {}", e.getMessage());
            // Report the upload failure
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Complete the multipart upload
     * @param object object
     * @param uploadId uploadId
     */
    @SneakyThrows
    public boolean mergeMultipartUpload(String object, String uploadId) {
        log.info("Merging multipart upload via <{}-{}-{}>", object, uploadId, fileProperties.getBucketName());
        // Build the ListParts request
        ListPartsRequest listPartsRequest = new ListPartsRequest(fileProperties.getBucketName(), object, uploadId);
        listPartsRequest.setMaxParts(1000);
        listPartsRequest.setPartNumberMarker(0);
        // Query the uploaded parts
        PartListing partList = amazonS3.listParts(listPartsRequest);
        List<PartSummary> parts = partList.getParts();
        if (parts == null || parts.isEmpty()) {
            // The uploaded part count does not match the record, so the parts cannot be merged
            throw new RuntimeException("Parts are missing, please upload again");
        }
        // Merge the parts
        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                fileProperties.getBucketName(), object, uploadId,
                parts.stream()
                        .map(partSummary -> new PartETag(partSummary.getPartNumber(), partSummary.getETag()))
                        .collect(Collectors.toList()));
        amazonS3.completeMultipartUpload(compRequest);
        return true;
    }

    /**
     * Get the object and its metadata; throws if the object does not exist
     * @param object object
     * @return S3Object
     */
    @SneakyThrows
    public S3Object statObject(String object) {
        return amazonS3.getObject(fileProperties.getBucketName(), object);
    }

    @SneakyThrows
    public S3Object getObject(String object, Long offset, Long contentLength) {
        GetObjectRequest request = new GetObjectRequest(fileProperties.getBucketName(), object);
        request.setRange(offset, offset + contentLength - 1); // set the byte offset and length
        return amazonS3.getObject(request);
    }
}

MinioUtil

import cn.hutool.core.util.IdUtil;
import cn.superlu.s3uploadservice.config.CustomMinioClient;
import cn.superlu.s3uploadservice.config.FileProperties;
import cn.superlu.s3uploadservice.constant.FileHttpCodeEnum;
import cn.superlu.s3uploadservice.model.bo.FileUploadInfo;
import cn.superlu.s3uploadservice.model.vo.UploadUrlsVO;
import com.google.common.collect.HashMultimap;
import io.minio.*;
import io.minio.http.Method;
import io.minio.messages.Part;
import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

@Slf4j
@Component
public class MinioUtil {

    private CustomMinioClient customMinioClient;

    @Resource
    private FileProperties fileProperties;

    // Spring auto-injection fails here, so build the client manually
    @PostConstruct
    public void init() {
        MinioAsyncClient minioClient = MinioAsyncClient.builder()
                .endpoint(fileProperties.getOss().getEndpoint())
                .credentials(fileProperties.getOss().getAccessKey(), fileProperties.getOss().getSecretKey())
                .build();
        customMinioClient = new CustomMinioClient(minioClient);
    }

    /**
     * List the parts already uploaded to MinIO
     * @param object   object name
     * @param uploadId multipart upload id (generated by MinIO)
     * @return List<Integer>
     */
    @SneakyThrows
    public List<Integer> getListParts(String object, String uploadId) {
        ListPartsResponse partResult = customMinioClient.listMultipart(
                fileProperties.getBucketName(), null, object, 1000, 0, uploadId, null, null);
        return partResult.result().partList().stream()
                .map(Part::partNumber)
                .collect(Collectors.toList());
    }

    /**
     * Presigned single-file upload
     * @param object object name (uuid based)
     * @return UploadUrlsVO
     */
    public UploadUrlsVO getUploadObjectUrl(String contentType, String object) {
        try {
            log.info("<{}> starting single-file upload <minio>", object);
            UploadUrlsVO urlsVO = new UploadUrlsVO();
            List<String> urlList = new ArrayList<>();
            // Mainly for images: to view them directly in the browser instead of downloading, the matching content-type must be set
            HashMultimap<String, String> headers = HashMultimap.create();
            if (contentType == null || contentType.equals("")) {
                contentType = "application/octet-stream";
            }
            headers.put("Content-Type", contentType);
            String uploadId = IdUtil.simpleUUID();
            Map<String, String> reqParams = new HashMap<>();
            reqParams.put("uploadId", uploadId);
            String url = customMinioClient.getPresignedObjectUrl(GetPresignedObjectUrlArgs.builder()
                    .method(Method.PUT)
                    .bucket(fileProperties.getBucketName())
                    .object(object)
                    .extraHeaders(headers)
                    .extraQueryParams(reqParams)
                    .expiry(fileProperties.getOss().getExpiry(), TimeUnit.DAYS)
                    .build());
            urlList.add(url);
            urlsVO.setUploadId(uploadId).setUrls(urlList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Single-file upload failed: {}", e.getMessage());
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Initialize a multipart upload
     * @param fileUploadInfo file info passed in by the frontend
     * @param object object
     * @return UploadUrlsVO
     */
    public UploadUrlsVO initMultiPartUpload(FileUploadInfo fileUploadInfo, String object) {
        Integer chunkCount = fileUploadInfo.getChunkCount();
        String contentType = fileUploadInfo.getContentType();
        String uploadId = fileUploadInfo.getUploadId();
        log.info("File <{}> - parts <{}>: initializing multipart upload, content type {}", object, chunkCount, contentType);
        UploadUrlsVO urlsVO = new UploadUrlsVO();
        try {
            HashMultimap<String, String> headers = HashMultimap.create();
            if (contentType == null || contentType.equals("")) {
                contentType = "application/octet-stream";
            }
            headers.put("Content-Type", contentType);
            // If an uploadId was passed in, this is a resumed upload; do not generate a new uploadId
            if (fileUploadInfo.getUploadId() == null || fileUploadInfo.getUploadId().equals("")) {
                uploadId = customMinioClient.initMultiPartUpload(fileProperties.getBucketName(), null, object, headers, null);
            }
            urlsVO.setUploadId(uploadId);
            List<String> partList = new ArrayList<>();
            Map<String, String> reqParams = new HashMap<>();
            reqParams.put("uploadId", uploadId);
            for (int i = 1; i <= chunkCount; i++) {
                reqParams.put("partNumber", String.valueOf(i));
                String uploadUrl = customMinioClient.getPresignedObjectUrl(GetPresignedObjectUrlArgs.builder()
                        .method(Method.PUT)
                        .bucket(fileProperties.getBucketName())
                        .object(object)
                        .expiry(1, TimeUnit.DAYS)
                        .extraQueryParams(reqParams)
                        .build());
                partList.add(uploadUrl);
            }
            log.info("Multipart upload initialized");
            urlsVO.setUrls(partList);
            return urlsVO;
        } catch (Exception e) {
            log.error("Failed to initialize multipart upload: {}", e.getMessage());
            // Report the upload failure
            throw new RuntimeException(FileHttpCodeEnum.UPLOAD_FILE_FAILED.getMsg());
        }
    }

    /**
     * Complete the multipart upload
     * @param object object
     * @param uploadId uploadId
     */
    @SneakyThrows
    public boolean mergeMultipartUpload(String object, String uploadId) {
        log.info("Merging multipart upload via <{}-{}-{}>", object, uploadId, fileProperties.getBucketName());
        // Currently supports at most 1000 parts
        Part[] parts = new Part[1000];
        // Query the uploaded parts
        ListPartsResponse partResult = customMinioClient.listMultipart(
                fileProperties.getBucketName(), null, object, 1000, 0, uploadId, null, null);
        int partNumber = 1;
        for (Part part : partResult.result().partList()) {
            parts[partNumber - 1] = new Part(partNumber, part.etag());
            partNumber++;
        }
        // Merge the parts
        customMinioClient.mergeMultipartUpload(fileProperties.getBucketName(), null, object, uploadId, parts, null, null);
        return true;
    }

    /**
     * Get the object's metadata; throws if the object does not exist
     * @param object object
     * @return StatObjectResponse
     */
    @SneakyThrows
    public StatObjectResponse statObject(String object) {
        return customMinioClient.statObject(StatObjectArgs.builder()
                .bucket(fileProperties.getBucketName())
                .object(object)
                .build()).get();
    }

    @SneakyThrows
    public GetObjectResponse getObject(String object, Long offset, Long contentLength) {
        return customMinioClient.getObject(GetObjectArgs.builder()
                .bucket(fileProperties.getBucketName())
                .object(object)
                .offset(offset)
                .length(contentLength)
                .build()).get();
    }
}

4. Open Question

When switching entirely to aws-s3 I ran into one problem I still have not been able to solve, so listing the uploaded parts is done with the MinIO client instead.

After uploading the parts, calls to amazonS3.listParts() always time out.

Someone reports the same problem at
https://gitee.com/Gary2016/minio-upload/issues/I8H8GM

If you have solved it, please share the solution in the comments.

