Heima Toutiao Project Experience Notes


swagger

(1) Overview

Swagger is a specification and complete framework for generating, describing, invoking, and visualizing RESTful web services (API Documentation & Design Tools for Teams | Swagger). Its main benefits are:

  1. It makes development with separated front and back ends easier and helps team collaboration

  2. API documentation is generated online automatically, reducing the burden on back-end developers of writing it by hand

  3. It supports functional testing

    Spring has brought Swagger into its own ecosystem through the Spring-swagger project, now called Springfox. By adding Springfox to a project you can use Swagger quickly and easily.

(2) Integrating Swagger with Spring Boot

  • Add the dependency to the heima-leadnews-model and heima-leadnews-common modules

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
</dependency>

Only heima-leadnews-common needs the configuration, because every other microservice module depends on it directly or indirectly.

  • Add a configuration class to the heima-leadnews-common module

New class: com.heima.common.swagger.SwaggerConfiguration

package com.heima.common.swagger;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.service.Contact;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfiguration {

    @Bean
    public Docket buildDocket() {
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(buildApiInfo())
                .select()
                // base package of the controllers (APIs) to scan
                .apis(RequestHandlerSelectors.basePackage("com.heima"))
                .paths(PathSelectors.any())
                .build();
    }

    private ApiInfo buildApiInfo() {
        Contact contact = new Contact("黑马程序员", "", "");
        return new ApiInfoBuilder()
                .title("黑马头条-平台管理API文档")
                .description("黑马头条后台api")
                .contact(contact)
                .version("1.0.0")
                .build();
    }
}

Add the following directory and file under the resources directory of the heima-leadnews-common module

File: resources/META-INF/spring.factories

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.heima.common.swagger.SwaggerConfiguration

(3) Common Swagger annotations

Adding Swagger annotations to Java classes generates the Swagger API documentation. The common annotations are:

@Api: placed on a class; describes the purpose of the Controller

@ApiOperation: describes one method of a class, i.e. one endpoint

@ApiParam: describes a single parameter

@ApiModel: marks an object used to receive parameters

@ApiModelProperty: describes one field of a parameter object

@ApiResponse: describes one HTTP response

@ApiResponses: describes the full set of HTTP responses

@ApiIgnore: excludes the annotated API from the documentation

@ApiError: the information returned when an error occurs

@ApiImplicitParam: describes one request parameter

@ApiImplicitParams: describes multiple request parameters

@ApiImplicitParam attributes:

  • paramType: where the parameter is submitted
    - path: submitted as part of the URL path
    - query: mapped automatically from the query string
    - body: submitted as a stream (POST only)
    - header: submitted in the request headers
    - form: submitted as a form (POST only)
  • dataType: the parameter's data type, e.g. Long or String; documentation only, not validated
  • name: the parameter name
  • value: a description of the parameter's meaning
  • required: whether the parameter is mandatory (true = mandatory, false = optional)
  • defaultValue: the parameter's default value
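As a small illustration of these attributes (a hypothetical endpoint, not from the project code):

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiImplicitParam;
import io.swagger.annotations.ApiOperation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Api(value = "demo", tags = "demo")
public class DemoController {

    @GetMapping("/demo/query")
    @ApiOperation("query by id")
    @ApiImplicitParam(name = "id", value = "record id", required = true, dataType = "Long", paramType = "query")
    public String queryById(@RequestParam Long id) {
        return "id=" + id;
    }
}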

Add the Swagger annotations to ApUserLoginController, as shown below:

@RestController
@RequestMapping("/api/v1/login")
@Api(value = "app端用户登录", tags = "ap_user", description = "app端用户登录API")
public class ApUserLoginController {

    @Autowired
    private ApUserService apUserService;

    @PostMapping("/login_auth")
    @ApiOperation("用户登录")
    public ResponseResult login(@RequestBody LoginDto dto) {
        return apUserService.login(dto);
    }
}

LoginDto

@Data
public class LoginDto {

    /**
     * phone number
     */
    @ApiModelProperty(value = "手机号", required = true)
    private String phone;

    /**
     * password
     */
    @ApiModelProperty(value = "密码", required = true)
    private String password;
}

Start the user microservice and open: http://localhost:51801/swagger-ui.html


knife4j

(1) Overview

knife4j is an enhanced solution for generating API documentation from Swagger in Java MVC frameworks. Its predecessor was swagger-bootstrap-ui; the name knife4j expresses the hope that it stays as small, light, and powerful as a knife.

gitee: knife4j, an enhanced solution combining Swagger2 and OpenAPI3

Official documentation: Knife4j · 集Swagger2及OpenAPI3为一体的增强解决方案. | Knife4j

Demo: http://knife4j.xiaominfo.com/doc.html

(2) Core features

The UI enhancement package has two core features: documentation and online debugging.

  • Documentation: following the Swagger specification, it lists the API documentation in detail, including the endpoint address, type, request example, request parameters, response example, response parameters, response codes, and so on; with swagger-bootstrap-ui the usage of an endpoint is clear at a glance.

  • Online debugging: powerful online API testing; it parses the current endpoint's parameters automatically, includes form validation, and shows the response body, headers, a curl command example, the response time, and the status code, so developers can debug an API online without a separate testing tool.

  • Personalization: the UI's display options can be customized through configuration.

  • Offline docs: it generates standards-compliant markdown API documentation that can be copied out and converted to html or pdf with any third-party markdown converter, replacing the swagger2markdown component.

  • Endpoint ordering: since 1.8.5 the UI supports ordering endpoints; for example, a registration flow consisting of several steps can be presented step by step using the ordering rules provided by swagger-bootstrap-ui, which makes integration easier for other developers.

(3) Quick integration

  • Add the knife4j dependency to the pom.xml of the heima-leadnews-common module:

<dependency>
    <groupId>com.github.xiaoymin</groupId>
    <artifactId>knife4j-spring-boot-starter</artifactId>
</dependency>
  • Create the Swagger configuration class

Create a configuration class in the heima-leadnews-common module.

Create the configuration file Swagger2Configuration.java and build the Docket group object provided by springfox:

package com.heima.common.swagger;

import com.github.xiaoymin.knife4j.spring.annotations.EnableKnife4j;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import springfox.bean.validators.configuration.BeanValidatorPluginsConfiguration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
@EnableKnife4j
@Import(BeanValidatorPluginsConfiguration.class)
public class Swagger2Configuration {

    @Bean(value = "defaultApi2")
    public Docket defaultApi2() {
        Docket docket = new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo())
                // group name
                .groupName("1.0")
                .select()
                // base package of the controllers to scan
                .apis(RequestHandlerSelectors.basePackage("com.heima"))
                .paths(PathSelectors.any())
                .build();
        return docket;
    }

    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("黑马头条API文档")
                .description("黑马头条API文档")
                .version("1.0")
                .build();
    }
}

Two of the annotations above deserve special mention:

  • @EnableSwagger2: provided by the Springfox-swagger framework; it enables the Swagger annotations and must be present.
  • @EnableKnife4j: provided by knife4j; it enables the UI enhancements such as dynamic parameters, parameter filtering, and endpoint ordering. Add it only if you want those enhancements; otherwise it can be omitted.
  • Add configuration

Add the new entry in spring.factories:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.heima.common.swagger.Swagger2Configuration, \
  com.heima.common.swagger.SwaggerConfiguration
  • Access

Open http://host:port/doc.html in the browser.


freemarker
HelloController

@Controller
public class HelloController {

    @GetMapping("/basic")
    public String hello(Model model) {
        //name
        //model.addAttribute("name","freemarker");
        //stu
        Student student = new Student();
        student.setName("小明");
        student.setAge(18);
        model.addAttribute("stu", student);
        return "01-basic";
    }

    @GetMapping("/list")
    public String list(Model model) {
        //------------------------------------
        Student stu1 = new Student();
        stu1.setName("小强");
        stu1.setAge(18);
        stu1.setMoney(1000.86f);
        stu1.setBirthday(new Date());

        // model data for the second student
        Student stu2 = new Student();
        stu2.setName("小红");
        stu2.setMoney(200.1f);
        stu2.setAge(19);

        // put both students into a List
        List<Student> stus = new ArrayList<>();
        stus.add(stu1);
        stus.add(stu2);

        // put the List into the model
        model.addAttribute("stus", stus);

        //------------------------------------
        // build Map data
        HashMap<String, Student> stuMap = new HashMap<>();
        stuMap.put("stu1", stu1);
        stuMap.put("stu2", stu2);
        // put the Map into the model
        model.addAttribute("stuMap", stuMap);

        // date
        model.addAttribute("today", new Date());
        // long numeric value
        model.addAttribute("point", 323213123132312L);
        return "02-list";
    }
}

01-basic.ftl 

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Hello World!</title>
</head>
<body>
<b>普通文本 String 展示:</b><br><br>
Hello ${name!''} <br>
<hr>
<b>对象Student中的数据展示:</b><br/>
姓名:${stu.name}<br/>
年龄:${stu.age}
<hr>
</body>
</html>

 02-list.ftl 

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Hello World!</title>
</head>
<body>
<#-- rendering the list data -->
<b>展示list中的stu数据:</b>
<br>
<br>
<table>
    <tr>
        <td>序号</td>
        <td>姓名</td>
        <td>年龄</td>
        <td>钱包</td>
    </tr>
    <#if stus??>
        <#list stus as stu>
            <#if stu.name='小红'>
                <tr style="color: red">
                    <td>${stu_index+1}</td>
                    <td>${stu.name}</td>
                    <td>${stu.age}</td>
                    <td>${stu.money}</td>
                </tr>
            <#else>
                <tr>
                    <td>${stu_index+1}</td>
                    <td>${stu.name}</td>
                    <td>${stu.age}</td>
                    <td>${stu.money}</td>
                </tr>
            </#if>
        </#list>
    </#if>
</table>
stu集合的大小:${stus?size}<br/>
<hr>
<#-- rendering the Map data -->
<b>map数据的展示:</b>
<br/><br/>
<a href="###">方式一:通过map['keyname'].property</a><br/>
输出stu1的学生信息:<br/>
姓名:${stuMap['stu1'].name}<br/>
年龄:${stuMap['stu1'].age}<br/>
<br/>
<a href="###">方式二:通过map.keyname.property</a><br/>
输出stu2的学生信息:<br/>
姓名:${stuMap.stu2.name}<br/>
年龄:${stuMap.stu2.age}<br/><br/>
<a href="###">遍历map中两个学生信息:</a><br/>
<table>
    <tr>
        <td>序号</td>
        <td>姓名</td>
        <td>年龄</td>
        <td>钱包</td>
    </tr>
    <#list stuMap?keys as key>
        <tr>
            <td>${key_index+1}</td>
            <td>${stuMap[key].name}</td>
            <td>${stuMap[key].age}</td>
            <td>${stuMap[key].money}</td>
        </tr>
    </#list>
</table>
<hr>
当前的日期为:${today?datetime}<br/>
当前的日期为:${today?string("yyyy年MM月")}
--------------------------<br>
${point?c}
</body>
</html>

@SpringBootTest(classes = FreemarkerDemoApplication.class)
@RunWith(SpringRunner.class)
public class FreemarkerTest {

    @Autowired
    private Configuration configuration;

    @Test
    public void test() throws IOException, TemplateException {
        Template template = configuration.getTemplate("02-list.ftl");
        /**
         * process() merges the template with data
         * first argument:  the data model
         * second argument: the output stream
         */
        template.process(getData(), new FileWriter("c:/list.html"));
    }

    private Map getData() {
        Map<String, Object> map = new HashMap();

        Student stu1 = new Student();
        stu1.setName("小强");
        stu1.setAge(18);
        stu1.setMoney(1000.86f);
        stu1.setBirthday(new Date());

        // model data for the second student
        Student stu2 = new Student();
        stu2.setName("小红");
        stu2.setMoney(200.1f);
        stu2.setAge(19);

        // put both students into a List
        List<Student> stus = new ArrayList<>();
        stus.add(stu1);
        stus.add(stu2);
        map.put("stus", stus);

        //------------------------------------
        // build Map data
        HashMap<String, Student> stuMap = new HashMap<>();
        stuMap.put("stu1", stu1);
        stuMap.put("stu2", stu2);
        map.put("stuMap", stuMap);

        // date
        map.put("today", new Date());
        // long numeric value
        map.put("point", 323213123132312L);
        return map;
    }
}

 

minio

Newer versions of MinIO are started as follows:

docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  --name minio1 \
  -v /home/minio/data:/data \
  -e "MINIO_ROOT_USER=minio" \
  -e "MINIO_ROOT_PASSWORD=minio123" \
  minio/minio server /data --console-address ":9001"

Assuming the server address is http://192.168.200.130:9000, enter http://192.168.200.130:9000/ in the address bar to reach the login page.

The Access Key is minio and the Secret Key is minio123. After logging in you can see the main console.

Click the "+" in the lower-right corner, then the bucket icon below it, to create a bucket.

Create the minio-demo module, with the following pom:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>heima-leadnews-test</artifactId>
        <groupId>com.heima</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>minio-demo</artifactId>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>io.minio</groupId>
            <artifactId>minio</artifactId>
            <version>7.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
        </dependency>
    </dependencies>
</project>

Bootstrap class:

package com.heima.minio;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MinIOApplication {

    public static void main(String[] args) {
        SpringApplication.run(MinIOApplication.class, args);
    }
}

Create a test class that uploads an html file:

package com.heima.minio.test;

import io.minio.MinioClient;
import io.minio.PutObjectArgs;

import java.io.FileInputStream;

public class MinIOTest {

    public static void main(String[] args) {
        FileInputStream fileInputStream = null;
        try {
            // location of the file to upload
            fileInputStream = new FileInputStream("D:\\list.html");
            // 1. create the minio client
            MinioClient minioClient = MinioClient.builder()
                    .credentials("minio", "minio123")
                    .endpoint("http://192.168.200.130:9000")
                    .build();
            // 2. upload
            PutObjectArgs putObjectArgs = PutObjectArgs.builder()
                    .object("list.html")       // object (file) name
                    .contentType("text/html")  // content type
                    .bucket("leadnews")        // bucket name, must match the bucket created in minio
                    .stream(fileInputStream, fileInputStream.available(), -1) // file stream; -1 uploads the whole stream
                    .build();
            minioClient.putObject(putObjectArgs);
            System.out.println("http://192.168.200.130:9000/leadnews/list.html");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}


 

@SpringBootTest(classes = ArticleApplication.class)
@RunWith(SpringRunner.class)
public class ArticleFreemarkerTest {

    @Autowired
    private ApArticleContentMapper apArticleContentMapper;

    @Autowired
    private ApArticleService apArticleService;

    @Autowired
    private Configuration configuration;

    @Autowired
    private FileStorageService fileStorageService;

    @Test
    public void createStaticUrlTest() throws Exception {
        // 1. load the article content by a known article id
        ApArticleContent apArticleContent = apArticleContentMapper.selectOne(
                Wrappers.<ApArticleContent>lambdaQuery()
                        .eq(ApArticleContent::getArticleId, 1302862387124125698L));
        if (apArticleContent != null && StringUtils.isNotBlank(apArticleContent.getContent())) {
            // 2. generate an html file from the content with freemarker
            Template template = configuration.getTemplate("article.ftl");
            // data model
            Map content = new HashMap();
            content.put("content", JSONArray.parseArray(apArticleContent.getContent()));
            StringWriter out = new StringWriter();
            // merge template and data
            template.process(content, out);

            // 3. upload the html file to minio
            InputStream in = new ByteArrayInputStream(out.toString().getBytes());
            String path = fileStorageService.uploadHtmlFile("", apArticleContent.getArticleId() + ".html", in);

            // 4. update ap_article, saving the static_url field
            apArticleService.update(Wrappers.<ApArticle>lambdaUpdate()
                    .eq(ApArticle::getId, apArticleContent.getArticleId())
                    .set(ApArticle::getStaticUrl, path));
        }
    }
}

Gateway: parse the token and pass the user id to downstream services via a header

Claims claimsBody = AppJwtUtil.getClaimsBody(token);
// check whether the token has expired
int result = AppJwtUtil.verifyToken(claimsBody);
if (result == 1 || result == 2) {
    response.setStatusCode(HttpStatus.UNAUTHORIZED);
    return response.setComplete();
}

// extract the user information
Object userId = claimsBody.get("id");
// store it in a header
ServerHttpRequest serverHttpRequest = request.mutate().headers(httpHeaders -> {
    httpHeaders.add("userId", userId + "");
}).build();
// rebuild the request
exchange.mutate().request(serverHttpRequest);

Interceptor

public class WmTokenInterceptor implements HandlerInterceptor {

    /**
     * Read the user info from the header and store it in the current thread
     */
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String userId = request.getHeader("userId");
        if (userId != null) {
            WmUser wmUser = new WmUser();
            wmUser.setApUserId(Integer.valueOf(userId));
            // store in the current thread
            WmThreadLocalUtil.setUser(wmUser);
        }
        return true;
    }

    /**
     * Clean up the thread-local data
     */
    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        WmThreadLocalUtil.clear();
    }
}

Register the interceptor

@Configuration
public class WebMvcConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new WmTokenInterceptor()).addPathPatterns("/**");
    }
}


 

mybatis-plus already integrates the snowflake algorithm; the following two steps are enough to use it in the project

First: add the following to the id field of the entity class, specifying the type ID_WORKER

@TableId(value = "id",type = IdType.ID_WORKER)
private Long id;

Second: configure the data-center id and machine id in application.yml

mybatis-plus:
  mapper-locations: classpath*:mapper/*.xml
  # alias package scan path; classes in this package get registered aliases
  type-aliases-package: com.heima.model.article.pojos
  global-config:
    datacenter-id: 1
    workerId: 1

Basic usage of feign

①: Add an interface in heima-leadnews-feign-api

First import the feign dependency (in the feign module)

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Define the article-side interface (in the feign module)

package com.heima.apis.article;

import com.heima.model.article.dtos.ArticleDto;
import com.heima.model.common.dtos.ResponseResult;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient(value = "leadnews-article") // service name of the provider
public interface IArticleClient {

    @PostMapping("/api/v1/article/save") // request path on the provider
    public ResponseResult saveArticle(@RequestBody ArticleDto dto);
}

In the article microservice (the provider), implement the interface:

@RestController
public class ArticleClient implements IArticleClient {

    @PostMapping("/api/v1/article/save")
    @Override
    public ResponseResult saveArticle(@RequestBody ArticleDto dto) {
        return null;
    }
}

The heima-leadnews-wemedia service (the consumer) already depends on the heima-leadnews-feign-api (feign) module; you only need to enable feign remote calls in the wemedia bootstrap class.

<dependency>
    <groupId>com.heima</groupId>
    <artifactId>heima-leadnews-feign-api</artifactId>
</dependency>

The annotation is @EnableFeignClients(basePackages = "com.heima.apis"); it must point to the apis package.

@SpringBootApplication
@EnableDiscoveryClient
@MapperScan("com.heima.wemedia.mapper")
@EnableFeignClients(basePackages = "com.heima.apis")
public class WemediaApplication {

    public static void main(String[] args) {
        SpringApplication.run(WemediaApplication.class, args);
    }

    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
        return interceptor;
    }
}

Then simply inject the client where it is needed:

@Autowired
private IArticleClient iArticleClient;

Implementation steps:

①: Write the fallback logic in heima-leadnews-feign-api

package com.heima.apis.article.fallback;

import com.heima.apis.article.IArticleClient;
import com.heima.model.article.dtos.ArticleDto;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import org.springframework.stereotype.Component;

/**
 * feign fallback configuration
 * @author itheima
 */
@Component
public class IArticleClientFallback implements IArticleClient {

    @Override
    public ResponseResult saveArticle(ArticleDto dto) {
        return ResponseResult.errorResult(AppHttpCodeEnum.SERVER_ERROR, "获取数据失败");
    }
}

Add a configuration class to the wemedia microservice that scans the package containing the fallback class; without it the fallback logic does not take effect:

package com.heima.wemedia.config;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan("com.heima.apis.article.fallback")
public class InitConfig {
}

②: Point the remote interface at the fallback class

package com.heima.apis.article;

import com.heima.apis.article.fallback.IArticleClientFallback;
import com.heima.model.article.dtos.ArticleDto;
import com.heima.model.common.dtos.ResponseResult;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

@FeignClient(value = "leadnews-article", fallback = IArticleClientFallback.class)
public interface IArticleClient {

    @PostMapping("/api/v1/article/save")
    public ResponseResult saveArticle(@RequestBody ArticleDto dto);
}

③: Enable degradation on the client, heima-leadnews-wemedia

Add the following to the wemedia configuration in the nacos config center to enable service degradation; it also sets the service response timeouts.

hystrix degradation can be configured on either the server side or the client side.

feign:
  # enable feign support for hystrix circuit breaking / degradation
  hystrix:
    enabled: true
  # adjust the call timeouts
  client:
    config:
      default:
        connectTimeout: 2000
        readTimeout: 2000


④: Test

Add the following to the saveArticle method of ApArticleServiceImpl:

try {
    Thread.sleep(3000);
} catch (InterruptedException e) {
    e.printStackTrace();
}

Run an article review from the wemedia side and you will see the service degrade.


/**
 * Auto-review of a wemedia article
 * @param id wemedia article id
 */
@Override
@Async // marks this method as asynchronous
public void autoScanWmNews(Integer id) {
    ......
}

/**
 * Wemedia publish, update, or save draft
 */
@Override
public ResponseResult submit(WmNewsDto dto) {
    // 0. parameter check
    if (dto == null || dto.getContent() == null) {
        return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
    }

    // 1. save or update the article
    WmNews wmNews = new WmNews();
    // property copy: only fields with the same name and type are copied
    BeanUtils.copyProperties(dto, wmNews);
    // cover images: list ---> string
    if (dto.getImages() != null && dto.getImages().size() > 0) {
        // [1.jpg,2.jpg] --> 1.jpg,2.jpg
        String imagStr = StringUtils.join(dto.getImages(), ",");
        wmNews.setImages(imagStr);
    }
    // if the cover type is "auto" (-1)
    if (dto.getType().equals(WemediaConstants.WM_NEWS_TYPE_AUTO)) {
        wmNews.setType(null);
    }
    saveOrUpdateWmNews(wmNews);

    // 2. if it is a draft, stop here
    if (dto.getStatus().equals(WmNews.Status.NORMAL.getCode())) {
        return ResponseResult.okResult(AppHttpCodeEnum.SUCCESS);
    }

    // 3. not a draft: save the relation between content images and materials
    List<String> materials = extractUrlInfo(dto.getContent());
    saveRelativeInfoForContent(materials, wmNews.getId());

    // 4. not a draft: save the relation between cover images and materials;
    //    if the layout is "auto", cover images are matched automatically
    saveRelativeInfoForCover(dto, wmNews, materials);

    // review the article
    wmNewsAutoScanService.autoScanWmNews(wmNews.getId());
    return ResponseResult.okResult(AppHttpCodeEnum.SUCCESS);
}
@SpringBootApplication
@EnableDiscoveryClient
@MapperScan("com.heima.wemedia.mapper")
@EnableFeignClients(basePackages = "com.heima.apis")
@EnableAsync // enable asynchronous invocation
public class WemediaApplication {

    public static void main(String[] args) {
        SpringApplication.run(WemediaApplication.class, args);
    }

    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
        return interceptor;
    }
}


Sensitive-word matching utility (trie/DFA based):
package com.heima.utils.common;

import java.util.*;

public class SensitiveWordUtil {

    public static Map<String, Object> dictionaryMap = new HashMap<>();

    /**
     * Build the keyword dictionary (a trie stored as nested maps)
     * @param words
     */
    public static void initMap(Collection<String> words) {
        if (words == null) {
            System.out.println("敏感词列表不能为空");
            return;
        }

        // initial capacity words.size(): the number of distinct first characters
        // (<= words.size(), since different words may share the same first character)
        Map<String, Object> map = new HashMap<>(words.size());
        // the current level while traversing
        Map<String, Object> curMap = null;
        Iterator<String> iterator = words.iterator();

        while (iterator.hasNext()) {
            String word = iterator.next();
            curMap = map;
            int len = word.length();
            for (int i = 0; i < len; i++) {
                // each character of the word
                String key = String.valueOf(word.charAt(i));
                // if the character does not exist on this level, create a node,
                // then descend to the next level
                Map<String, Object> wordMap = (Map<String, Object>) curMap.get(key);
                if (wordMap == null) {
                    // each node holds two things: the next level and isEnd (end-of-word flag)
                    wordMap = new HashMap<>(2);
                    wordMap.put("isEnd", "0");
                    curMap.put(key, wordMap);
                }
                curMap = wordMap;
                // if this is the last character of the word, set isEnd to 1
                if (i == len - 1) {
                    curMap.put("isEnd", "1");
                }
            }
        }

        dictionaryMap = map;
    }

    /**
     * Check whether the text starting at beginIndex matches a keyword
     * @param text
     * @param beginIndex
     * @return the length of the longest complete keyword found, 0 if none
     */
    private static int checkWord(String text, int beginIndex) {
        if (dictionaryMap == null) {
            throw new RuntimeException("字典不能为空");
        }
        int matchedLength = 0; // length of the longest complete word so far
        int wordLength = 0;
        Map<String, Object> curMap = dictionaryMap;
        int len = text.length();
        // match starting from position beginIndex of the text
        for (int i = beginIndex; i < len; i++) {
            String key = String.valueOf(text.charAt(i));
            // descend to the next node for the current character
            curMap = (Map<String, Object>) curMap.get(key);
            if (curMap == null) {
                break;
            } else {
                wordLength++;
                if ("1".equals(curMap.get("isEnd"))) {
                    // remember the length only when a complete word ends here,
                    // so a partially matched longer word is not reported
                    matchedLength = wordLength;
                }
            }
        }
        return matchedLength;
    }

    /**
     * Get the matched keywords and their hit counts
     * @param text
     * @return
     */
    public static Map<String, Integer> matchWords(String text) {
        Map<String, Integer> wordMap = new HashMap<>();
        int len = text.length();
        for (int i = 0; i < len; i++) {
            int wordLength = checkWord(text, i);
            if (wordLength > 0) {
                String word = text.substring(i, i + wordLength);
                // increment the hit count of the keyword
                if (wordMap.containsKey(word)) {
                    wordMap.put(word, wordMap.get(word) + 1);
                } else {
                    wordMap.put(word, 1);
                }
                i += wordLength - 1;
            }
        }
        return wordMap;
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("法轮");
        list.add("法轮功");
        list.add("冰毒");
        // initialize the dictionary
        initMap(list);
        String content = "我是一个好人,并不会卖冰毒,也不操练法轮功,我真的不卖冰毒";
        // search the text for sensitive words
        Map<String, Integer> map = matchWords(content);
        System.out.println(map);
    }
}

 

Tess4j example

①: Create a project and import the tess4j dependency

<dependency>
    <groupId>net.sourceforge.tess4j</groupId>
    <artifactId>tess4j</artifactId>
    <version>4.1.1</version>
</dependency>

②: Import the Chinese language data: copy the tessdata folder from the course material into your own workspace

③: Write a test class:

package com.heima.tess4j;

import net.sourceforge.tess4j.ITesseract;
import net.sourceforge.tess4j.Tesseract;

import java.io.File;

public class Application {

    /**
     * Recognize the text in an image
     */
    public static void main(String[] args) {
        try {
            // create the Tesseract object
            ITesseract tesseract = new Tesseract();
            // path to the language data
            tesseract.setDatapath("C:\\Users\\83825\\Desktop");
            // language: simplified Chinese
            tesseract.setLanguage("chi_sim");
            // the local image
            File file = new File("C:\\Users\\83825\\Desktop\\test6.png");
            // run the ocr recognition
            String result = tesseract.doOCR(file);
            // replace line breaks and tabs so the result is a single line
            result = result.replaceAll("\\r|\\n", "-").replaceAll(" ", "");
            System.out.println("识别的结果为:" + result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Note: the paths must contain only English (ASCII) characters.


delayed tasks (heima-leadnews-schedule)

package com.heima.model.schedule.pojos;

import com.baomidou.mybatisplus.annotation.*;
import lombok.Data;

import java.io.Serializable;
import java.util.Date;

/**
 * Task log entity
 * @author itheima
 */
@Data
@TableName("taskinfo_logs")
public class TaskinfoLogs implements Serializable {

    private static final long serialVersionUID = 1L;

    /**
     * task id
     */
    @TableId(type = IdType.ID_WORKER)
    private Long taskId;

    /**
     * execution time
     */
    @TableField("execute_time")
    private Date executeTime;

    /**
     * parameters
     */
    @TableField("parameters")
    private byte[] parameters;

    /**
     * priority
     */
    @TableField("priority")
    private Integer priority;

    /**
     * task type
     */
    @TableField("task_type")
    private Integer taskType;

    /**
     * version number, used for optimistic locking
     */
    @Version
    private Integer version;

    /**
     * status: 0=SCHEDULED 1=EXECUTED 2=CANCELLED
     */
    @TableField("status")
    private Integer status;
}

@Service
@Transactional
@Slf4j
public class TaskServiceImpl implements TaskService {

    @Autowired
    private CacheService cacheService;

    @Autowired
    private TaskinfoMapper taskinfoMapper;

    @Autowired
    private TaskinfoLogsMapper taskinfoLogsMapper;

    @Override
    public long addTask(Task task) {
        // 1. save the task to the database
        boolean success = addTaskToDb(task);
        if (success) {
            // 2. save the task to redis
            addTaskToCache(task);
        }
        return task.getTaskId();
    }

    /**
     * Save the task to redis
     * @param task
     */
    private void addTaskToCache(Task task) {
        String key = task.getTaskType() + "_" + task.getPriority();

        // timestamp five minutes from now, in milliseconds
        Calendar calendar = Calendar.getInstance();
        calendar.add(Calendar.MINUTE, 5);
        long nextScheduleTime = calendar.getTimeInMillis();

        if (task.getExecuteTime() <= System.currentTimeMillis()) {
            // 2.1 execution time <= now: push to the List (ready to consume)
            cacheService.lLeftPush(ScheduleConstants.TOPIC + key, JSON.toJSONString(task));
        } else if (task.getExecuteTime() <= nextScheduleTime) {
            // 2.2 execution time > now but within the next five minutes: store in the zset
            cacheService.zAdd(ScheduleConstants.FUTURE + key, JSON.toJSONString(task), task.getExecuteTime());
        }
    }

    /**
     * Save the task to the database
     * @param task
     * @return
     */
    private boolean addTaskToDb(Task task) {
        boolean flag = false;
        try {
            // save the task row
            Taskinfo taskinfo = new Taskinfo();
            BeanUtils.copyProperties(task, taskinfo);
            taskinfo.setExecuteTime(new Date(task.getExecuteTime()));
            taskinfoMapper.insert(taskinfo);

            // set the generated task id back on the Task
            task.setTaskId(taskinfo.getTaskId());

            // save the task log row
            TaskinfoLogs taskinfoLogs = new TaskinfoLogs();
            BeanUtils.copyProperties(taskinfo, taskinfoLogs);
            taskinfoLogs.setVersion(1);
            taskinfoLogs.setStatus(ScheduleConstants.SCHEDULED);
            taskinfoLogsMapper.insert(taskinfoLogs);
            flag = true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return flag;
    }
}

/**
 * Cancel a task
 * @param taskId
 * @return
 */
@Override
public boolean cancelTask(long taskId) {
    boolean flag = false;

    // delete the task and update the task log
    Task task = updateDb(taskId, ScheduleConstants.CANCELLED);

    // remove the task from redis
    if (task != null) {
        removeTaskFromCache(task);
        flag = true;
    }
    return flag;
}

/**
 * Remove the task from redis
 * @param task
 */
private void removeTaskFromCache(Task task) {
    String key = task.getTaskType() + "_" + task.getPriority();
    if (task.getExecuteTime() <= System.currentTimeMillis()) {
        cacheService.lRemove(ScheduleConstants.TOPIC + key, 0, JSON.toJSONString(task));
    } else {
        cacheService.zRemove(ScheduleConstants.FUTURE + key, JSON.toJSONString(task));
    }
}

/**
 * Delete the task and update the task log
 * @param taskId
 * @param status
 * @return
 */
private Task updateDb(long taskId, int status) {
    Task task = new Task();
    try {
        // delete the task row
        taskinfoMapper.deleteById(taskId);

        // update the task log
        TaskinfoLogs taskinfoLogs = taskinfoLogsMapper.selectById(taskId);
        taskinfoLogs.setStatus(status);
        taskinfoLogsMapper.updateById(taskinfoLogs);

        BeanUtils.copyProperties(taskinfoLogs, task);
        task.setExecuteTime(taskinfoLogs.getExecuteTime().getTime());
    } catch (Exception e) {
        log.error("task cancel exception taskId={}", taskId);
    }
    return task;
}

/**
 * Pull a task by type and priority
 * @param type
 * @param priority
 * @return
 */
@Override
public Task poll(int type, int priority) {
    Task task = new Task();
    try {
        String key = type + "_" + priority;
        // pop the task data from redis
        String task_json = cacheService.lRightPop(ScheduleConstants.TOPIC + key);
        if (StringUtils.isNotBlank(task_json)) {
            task = JSON.parseObject(task_json, Task.class);
            // update the database records
            updateDb(task.getTaskId(), ScheduleConstants.EXECUTED);
        }
    } catch (Exception e) {
        e.printStackTrace();
        log.error("poll task exception");
    }
    return task;
}

 

@Test
public void testKeys() {
    Set<String> keys = cacheService.keys("future_*");
    System.out.println(keys);

    Set<String> scan = cacheService.scan("future_*");
    System.out.println(scan);
}

 

// took about 6151 ms
@Test
public void testPiple1() {
    long start = System.currentTimeMillis();
    for (int i = 0; i < 10000; i++) {
        Task task = new Task();
        task.setTaskType(1001);
        task.setPriority(1);
        task.setExecuteTime(new Date().getTime());
        cacheService.lLeftPush("1001_1", JSON.toJSONString(task));
    }
    System.out.println("耗时" + (System.currentTimeMillis() - start));
}

// took about 1472 ms
@Test
public void testPiple2() {
    long start = System.currentTimeMillis();
    // use pipelining
    List<Object> objectList = cacheService.getstringRedisTemplate().executePipelined(new RedisCallback<Object>() {
        @Nullable
        @Override
        public Object doInRedis(RedisConnection redisConnection) throws DataAccessException {
            for (int i = 0; i < 10000; i++) {
                Task task = new Task();
                task.setTaskType(1001);
                task.setPriority(1);
                task.setExecuteTime(new Date().getTime());
                redisConnection.lPush("1001_1".getBytes(), JSON.toJSONString(task).getBytes());
            }
            return null;
        }
    });
    System.out.println("使用管道技术执行10000次自增操作共耗时:" + (System.currentTimeMillis() - start) + "毫秒");
}

 

/**
 * Periodically move due "future" tasks to the consumable list
 */
@Scheduled(cron = "0 */1 * * * ?")
public void refresh() {
    log.info("未来数据定时刷新---定时任务");

    // all keys of the future data sets
    Set<String> futureKeys = cacheService.scan(ScheduleConstants.FUTURE + "*");
    for (String futureKey : futureKeys) { // e.g. future_100_50
        // derive the topic key of the current (consumable) data
        String topicKey = ScheduleConstants.TOPIC + futureKey.split(ScheduleConstants.FUTURE)[1];

        // query the entries whose score (execution time) is due
        Set<String> tasks = cacheService.zRangeByScore(futureKey, 0, System.currentTimeMillis());

        // move the data
        if (!tasks.isEmpty()) {
            cacheService.refreshWithPipeline(futureKey, topicKey, tasks);
            log.info("成功的将" + futureKey + "刷新到了" + topicKey);
        }
    }
}

public List<Object> refreshWithPipeline(String future_key, String topic_key, Collection<String> values) {
    List<Object> objects = stringRedisTemplate.executePipelined(new RedisCallback<Object>() {
        @Nullable
        @Override
        public Object doInRedis(RedisConnection redisConnection) throws DataAccessException {
            StringRedisConnection stringRedisConnection = (StringRedisConnection) redisConnection;
            String[] strings = values.toArray(new String[values.size()]);
            // push to the consumable list and remove from the future zset in one pipeline
            stringRedisConnection.rPush(topic_key, strings);
            stringRedisConnection.zRem(future_key, strings);
            return null;
        }
    });
    return objects;
}
@SpringBootApplication
@MapperScan("com.heima.schedule.mapper")
@EnableScheduling // enable scheduled task annotations
public class ScheduleApplication {

    public static void main(String[] args) {
        SpringApplication.run(ScheduleApplication.class, args);
    }

    /**
     * mybatis-plus optimistic locking support
     * @return
     */
    @Bean
    public MybatisPlusInterceptor optimisticLockerInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(new OptimisticLockerInnerInterceptor());
        return interceptor;
    }
}

/**
 * Acquire a distributed lock
 *
 * @param name   lock name
 * @param expire expiry in milliseconds
 * @return the lock token, or null if the lock was not acquired
 */
public String tryLock(String name, long expire) {
    name = name + "_lock";
    String token = UUID.randomUUID().toString();
    RedisConnectionFactory factory = stringRedisTemplate.getConnectionFactory();
    RedisConnection conn = factory.getConnection();
    try {
        // equivalent redis command:
        // set key value [EX seconds] [PX milliseconds] [NX|XX]
        Boolean result = conn.set(
                name.getBytes(),
                token.getBytes(),
                Expiration.from(expire, TimeUnit.MILLISECONDS),
                RedisStringCommands.SetOption.SET_IF_ABSENT // NX
        );
        if (result != null && result)
            return token;
    } finally {
        RedisConnectionUtils.releaseConnection(conn, factory, false);
    }
    return null;
}
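The notes show only lock acquisition; a matching release would compare the token before deleting, so that one instance cannot release a lock held by another. A sketch (not from the course code; it assumes the same stringRedisTemplate field used by tryLock):

/**
 * Release the lock only if the caller still owns it (sketch, not from the source).
 * Note: GET followed by DEL is not atomic; a Lua script would make it so.
 */
public void releaseLock(String name, String token) {
    name = name + "_lock";
    String current = stringRedisTemplate.opsForValue().get(name);
    if (token != null && token.equals(current)) {
        stringRedisTemplate.delete(name);
    }
}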
/**
 * Periodically refresh future data, guarded by the distributed lock
 */
@Scheduled(cron = "0 */1 * * * ?")
public void refresh() {
    String token = cacheService.tryLock("FUTRUE_TASK_SYNC", 1000 * 30);
    if (StringUtils.isNotBlank(token)) {
        log.info("未来数据定时刷新---定时任务");

        // all keys of the future data sets
        Set<String> futureKeys = cacheService.scan(ScheduleConstants.FUTURE + "*");
        for (String futureKey : futureKeys) { // e.g. future_100_50
            // derive the topic key of the current (consumable) data
            String topicKey = ScheduleConstants.TOPIC + futureKey.split(ScheduleConstants.FUTURE)[1];

            // query the entries whose score (execution time) is due
            Set<String> tasks = cacheService.zRangeByScore(futureKey, 0, System.currentTimeMillis());

            // move the data
            if (!tasks.isEmpty()) {
                cacheService.refreshWithPipeline(futureKey, topicKey, tasks);
                log.info("成功的将" + futureKey + "刷新到了" + topicKey);
            }
        }
    }
}

/**
 * Periodically sync database tasks to redis
 */
@PostConstruct
@Scheduled(cron = "0 */5 * * * ?")
public void reloadData() {
    // clear the cached data: list and zset
    clearCache();

    // query all tasks due within the next five minutes
    Calendar calendar = Calendar.getInstance();
    calendar.add(Calendar.MINUTE, 5);
    List<Taskinfo> taskinfoList = taskinfoMapper.selectList(
            Wrappers.<Taskinfo>lambdaQuery().lt(Taskinfo::getExecuteTime, calendar.getTime()));

    // add the tasks to redis
    if (taskinfoList != null && taskinfoList.size() > 0) {
        for (Taskinfo taskinfo : taskinfoList) {
            Task task = new Task();
            BeanUtils.copyProperties(taskinfo, task);
            task.setExecuteTime(taskinfo.getExecuteTime().getTime());
            addTaskToCache(task);
        }
    }
    log.info("数据库的任务同步到了redis");
}

/**
 * Clear the cached data
 */
public void clearCache() {
    Set<String> topicKeys = cacheService.scan(ScheduleConstants.TOPIC + "*");
    Set<String> futureKeys = cacheService.scan(ScheduleConstants.FUTURE + "*");
    cacheService.delete(topicKeys);
    cacheService.delete(futureKeys);
}

package com.heima.utils.common;

import com.heima.model.wemedia.pojos.WmNews;
import io.protostuff.LinkedBuffer;
import io.protostuff.ProtostuffIOUtil;
import io.protostuff.Schema;
import io.protostuff.runtime.RuntimeSchema;

public class ProtostuffUtil {

    /**
     * serialize
     * @param t
     * @param <T>
     * @return
     */
    public static <T> byte[] serialize(T t) {
        Schema schema = RuntimeSchema.getSchema(t.getClass());
        return ProtostuffIOUtil.toByteArray(t, schema,
                LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE));
    }

    /**
     * deserialize
     * @param bytes
     * @param c
     * @param <T>
     * @return
     */
    public static <T> T deserialize(byte[] bytes, Class<T> c) {
        T t = null;
        try {
            t = c.newInstance();
            Schema schema = RuntimeSchema.getSchema(t.getClass());
            ProtostuffIOUtil.mergeFrom(bytes, t, schema);
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        }
        return t;
    }

    /**
     * compare jdk serialization with protostuff serialization
     * @param args
     */
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            WmNews wmNews = new WmNews();
            JdkSerializeUtil.serialize(wmNews);
        }
        System.out.println(" jdk 花费 " + (System.currentTimeMillis() - start));

        start = System.currentTimeMillis();
        for (int i = 0; i < 1000000; i++) {
            WmNews wmNews = new WmNews();
            ProtostuffUtil.serialize(wmNews);
        }
        System.out.println(" protostuff 花费 " + (System.currentTimeMillis() - start));
    }
}
WmNewsServiceImpl
@Override
public ResponseResult submit(WmNewsDto dto) {
    ......
    // review the article
    //wmNewsAutoScanService.autoScanWmNews(wmNews.getId());
    wmNewsTaskService.addNewsToTask(wmNews.getId(), wmNews.getPublishTime());
    return ResponseResult.okResult(AppHttpCodeEnum.SUCCESS);
}
WmNewsTaskServiceImpl
@Service
@Slf4j
public class WmNewsTaskServiceImpl implements WmNewsTaskService {

    @Autowired
    private IScheduleClient scheduleClient;

    @Autowired
    private WmNewsAutoScanService wmNewsAutoScanService;

    /**
     * Add a task to the delay queue
     * @param id          article id
     * @param publishTime publish time; used as the task's execution time
     */
    @Override
    @Async
    public void addNewsToTask(Integer id, Date publishTime) {
        log.info("添加任务到延迟服务中------begin");

        Task task = new Task();
        task.setExecuteTime(publishTime.getTime());
        task.setTaskType(TaskTypeEnum.NEWS_SCAN_TIME.getTaskType());
        task.setPriority(TaskTypeEnum.NEWS_SCAN_TIME.getPriority());
        WmNews wmNews = new WmNews();
        wmNews.setId(id);
        task.setParameters(ProtostuffUtil.serialize(wmNews));

        scheduleClient.addTask(task);

        log.info("添加任务到延迟服务中------end");
    }

    /**
     * Consume tasks and review the articles
     */
    @Scheduled(fixedRate = 1000)
    @Override
    public void scanNewsByTask() {
        log.info("消费任务,审核文章");
        ResponseResult responseResult = scheduleClient.poll(
                TaskTypeEnum.NEWS_SCAN_TIME.getTaskType(),
                TaskTypeEnum.NEWS_SCAN_TIME.getPriority());
        if (responseResult.getCode().equals(200) && responseResult.getData() != null) {
            Task task = JSON.parseObject(JSON.toJSONString(responseResult.getData()), Task.class);
            WmNews wmNews = ProtostuffUtil.deserialize(task.getParameters(), WmNews.class);
            wmNewsAutoScanService.autoScanWmNews(wmNews.getId());
        }
    }
}

Problem encountered:

java: Annotation processing is not supported for module cycles. Please ensure that all modules from cycle [heima-leadnews-schedule,heima-leadnews-feign-api] are excluded from annotation processing

Cause: heima-leadnews-feign-api needs the Task class from heima-leadnews-schedule, while heima-leadnews-schedule implements the interface from heima-leadnews-feign-api. Each module therefore depends on the other, forming a circular dependency.

Solution: move the Task class out of heima-leadnews-schedule into a third module dedicated to shared classes and let both modules depend on it. Introducing such a third module breaks the cycle; see the sketch below.
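For instance, if the shared Task class moved into the existing heima-leadnews-model module (the notes do not name the actual target module, so this is an assumption), both former cycle members would depend one way only:

<!-- in heima-leadnews-feign-api/pom.xml AND heima-leadnews-schedule/pom.xml -->
<dependency>
    <groupId>com.heima</groupId>
    <!-- assumed target module for the shared Task class -->
    <artifactId>heima-leadnews-model</artifactId>
</dependency>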


 

kafka installation and configuration

Kafka has a hard dependency on zookeeper, which stores kafka's node metadata, so zookeeper must be installed before Kafka.

  • Install zookeeper with Docker

Pull the image:

docker pull zookeeper:3.4.14

Create the container:

docker run -d --name zookeeper -p 2181:2181 zookeeper:3.4.14
  • Install kafka with Docker

Pull the image:

docker pull wurstmeister/kafka:2.12-2.3.1

Create the container:

docker run -d --name kafka \
--env KAFKA_ADVERTISED_HOST_NAME=192.168.136.152 \
--env KAFKA_ZOOKEEPER_CONNECT=192.168.136.152:2181 \
--env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.136.152:9092 \
--env KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
--env KAFKA_HEAP_OPTS="-Xmx256M -Xms256M" \
--net=host wurstmeister/kafka:2.12-2.3.1


 

Message producer

package com.heima.kafka.sample;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

/**
 * Producer
 */
public class ProducerQuickStart {

    public static void main(String[] args) {
        // 1. kafka connection configuration
        Properties properties = new Properties();
        // kafka broker address
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.136.152:9092");
        // key and value serializers
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // 2. create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // 3. send a message
        /**
         * first argument:  topic
         * second argument: message key
         * third argument:  message value
         */
        ProducerRecord<String, String> kvProducerRecord = new ProducerRecord<String, String>("topic-first", "key-001", "hello kafka");
        producer.send(kvProducerRecord);
        System.out.println("消息发送成功");

        // 4. close the channel; this is required, otherwise the message may not be sent
        producer.close();
    }
}

Message consumer

package com.heima.kafka.sample;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * Consumer
 */
public class ConsumerQuickStart {

    public static void main(String[] args) {
        // 1. kafka configuration
        Properties properties = new Properties();
        // broker address
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.136.152:9092");
        // key and value deserializers
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // consumer group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");

        // 2. create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);

        // 3. subscribe to the topic
        consumer.subscribe(Collections.singletonList("topic-first"));

        // 4. poll for messages once per second
        while (true) { // simulated listening loop
            ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                System.out.println(consumerRecord.key());
                System.out.println(consumerRecord.value());
            }
        }
    }
}

Problem encountered: if messages do not arrive, turn off the server firewall or expose kafka's access port; see the commands below.
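A sketch of those firewall commands (assuming a CentOS host with firewalld):

# open kafka's port instead of disabling the firewall entirely
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --reload

# or, in a throwaway test environment, stop the firewall
systemctl stop firewalld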

Producer with message confirmation (acks), retries, compression, and synchronous/asynchronous sending:

package com.heima.kafka.sample;

import org.apache.kafka.clients.producer.*;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

/**
 * Producer
 */
public class ProducerQuickStart {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // 1. kafka connection configuration
        Properties properties = new Properties();
        // kafka broker address
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.136.152:9092");
        // key and value serializers
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // ack configuration: message confirmation mechanism
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        // number of retries
        properties.put(ProducerConfig.RETRIES_CONFIG, 10);
        // message compression
        properties.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        // 2. create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // 3. send a message
        ProducerRecord<String, String> kvProducerRecord = new ProducerRecord<String, String>("topic-first", "hello kafka");

        // synchronous send
        //RecordMetadata recordMetadata = producer.send(kvProducerRecord).get();
        //System.out.println(recordMetadata.offset()); // get the offset

        // asynchronous send
        producer.send(kvProducerRecord, new Callback() {
            @Override
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                if (e != null) {
                    System.out.println("记录异常信息到日志表中");
                }
                System.out.println(recordMetadata.offset());
            }
        });
        System.out.println("消息发送成功");

        // 4. close the channel; this is required, otherwise the message may not be sent
        producer.close();
    }
}

Consumer with manually committed offsets (synchronous and asynchronous commits):

package com.heima.kafka.sample;

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

/**
 * Consumer
 */
public class ConsumerQuickStart {

    public static void main(String[] args) {
        // 1. kafka configuration
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.136.152:9092");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // consumer group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
        // manual offset commits
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        // 2. create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);

        // 3. subscribe to the topic
        consumer.subscribe(Collections.singletonList("topic-first"));

        // 4. poll for messages, combining asynchronous and synchronous commits
        try {
            while (true) { // simulated listening loop
                ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                    System.out.println(consumerRecord.key());
                    System.out.println(consumerRecord.value());
                    System.out.println(consumerRecord.offset());
                    System.out.println(consumerRecord.partition());
                }
                // asynchronous commit while consuming
                consumer.commitAsync();
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("记录错误的信息:" + e);
        } finally {
            // synchronous commit on shutdown
            consumer.commitSync();
        }

        // alternative: commit per record inside the loop
        //while (true) {
        //    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
        //    for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
        //        System.out.println(consumerRecord.key());
        //        System.out.println(consumerRecord.value());
        //        System.out.println(consumerRecord.offset());
        //        System.out.println(consumerRecord.partition());
        //        try {
        //            // synchronous commit
        //            consumer.commitSync();
        //        } catch (CommitFailedException e) {
        //            System.out.println("记录提交失败的异常:" + e);
        //        }
        //        // or: asynchronously commit the latest offset
        //        //consumer.commitAsync(new OffsetCommitCallback() {
        //        //    @Override
        //        //    public void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception e) {
        //        //        if (e != null) {
        //        //            System.out.println("记录错误的提交偏移量:" + map + ",异常信息" + e);
        //        //        }
        //        //    }
        //        //});
        //    }
        //}
    }
}

 


Setting up the ElasticSearch environment

Pull the image

docker pull elasticsearch:7.4.0

Create the container

docker run -id --name elasticsearch \
  --restart=always \
  -p 9200:9200 -p 9300:9300 \
  -v /usr/share/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -e "discovery.type=single-node" \
  elasticsearch:7.4.0

Configure the ik Chinese analyzer

Because the plugins directory was mounted when the elasticsearch container was created, the ik analyzer can be configured directly on the host.

The ik analyzer version must match the elasticsearch version.

Upload elasticsearch-analysis-ik-7.4.0.zip from the course material to the server, put it into the plugins directory, and unzip it:

# change directory
cd /usr/share/elasticsearch/plugins
# create the directory
mkdir analysis-ik
cd analysis-ik
# move the file from root's home directory
mv elasticsearch-analysis-ik-7.4.0.zip /usr/share/elasticsearch/plugins/analysis-ik
# unzip the file
cd /usr/share/elasticsearch/plugins/analysis-ik
unzip elasticsearch-analysis-ik-7.4.0.zip

2.4) Test with postman
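For example, the ik analyzer can be verified with a POST request to http://192.168.200.130:9200/_analyze whose body is (illustrative text):

{
    "analyzer": "ik_smart",
    "text": "黑马程序员"
}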


Creating the index and mapping

Add the mapping with postman

PUT request: http://192.168.200.152:9200/app_info_article

{"mappings":{"properties":{"id":{"type":"long"},"publishTime":{"type":"date"},"layout":{"type":"integer"},"images":{"type":"keyword","index": false},"staticUrl":{"type":"keyword","index": false},"authorId": {"type": "long"},"authorName": {"type": "text"},"title":{"type":"text","analyzer":"ik_smart"},"content":{"type":"text","analyzer":"ik_smart"}}}
}

GET request to view the mapping: http://192.168.200.130:9200/app_info_article

DELETE request to remove the index and mapping: http://192.168.200.130:9200/app_info_article

GET request to query all documents: http://192.168.200.130:9200/app_info_article/_search

Setting up the search microservice

Add the dependencies to the pom of heima-leadnews-service:

<!--elasticsearch-->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.4.0</version>
</dependency>

(3) nacos config center entry for leadnews-search

spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
elasticsearch:
  host: 192.168.136.152
  port: 9200
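The elasticsearch.host and elasticsearch.port values above are custom properties; presumably the search service turns them into a RestHighLevelClient bean along these lines (a sketch; package and class name assumed):

package com.heima.search.config; // assumed package

import lombok.Getter;
import lombok.Setter;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Getter
@Setter
@Configuration
@ConfigurationProperties(prefix = "elasticsearch")
public class ElasticSearchConfig {

    private String host;
    private int port;

    @Bean
    public RestHighLevelClient client() {
        // build the high-level client from the configured host and port
        return new RestHighLevelClient(RestClient.builder(new HttpHost(host, port, "http")));
    }
}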
@SpringBootTest
@RunWith(SpringRunner.class)
public class ApArticleTest {

    @Autowired
    private ApArticleMapper apArticleMapper;

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    /**
     * Note: if the data volume is large, import it page by page
     * @throws Exception
     */
    @Test
    public void init() throws Exception {
        // 1. query all matching articles
        List<SearchArticleVo> searchArticleVos = apArticleMapper.loadArticleList();

        // 2. bulk import into the es index
        BulkRequest bulkRequest = new BulkRequest("app_info_article");
        for (SearchArticleVo searchArticleVo : searchArticleVos) {
            IndexRequest indexRequest = new IndexRequest()
                    .id(searchArticleVo.getId().toString())
                    .source(JSON.toJSONString(searchArticleVo), XContentType.JSON);
            // add each document to the bulk request
            bulkRequest.add(indexRequest);
        }
        BulkResponse response = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
        System.out.println("插入结果:" + response.status());
    }
}

ArticleSearchServiceImpl
@Service
@Slf4j
public class ArticleSearchServiceImpl implements ArticleSearchService {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @Override
    public ResponseResult search(UserSearchDto dto) throws IOException {
        // 1. check the parameters
        if (dto == null || StringUtils.isBlank(dto.getSearchWords())) {
            return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
        }

        // 2. build and execute the query
        SearchRequest searchRequest = new SearchRequest("app_info_article");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

        // boolean query (more than one condition)
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();

        // query with the analyzed keywords
        QueryStringQueryBuilder queryStringQueryBuilder = QueryBuilders
                .queryStringQuery(dto.getSearchWords())
                .field("title").field("content")
                .defaultOperator(Operator.OR);
        boolQueryBuilder.must(queryStringQueryBuilder);

        // filter out data newer than minBehotTime
        if (dto.getMinBehotTime() != null) {
            RangeQueryBuilder rangeQueryBuilder = QueryBuilders
                    .rangeQuery("publishTime")
                    .lt(dto.getMinBehotTime().getTime());
            boolQueryBuilder.filter(rangeQueryBuilder);
        }

        // pagination
        searchSourceBuilder.from(0);
        searchSourceBuilder.size(dto.getPageSize());

        // sort by publish time, descending
        searchSourceBuilder.sort("publishTime", SortOrder.DESC);

        // highlight the title
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("title");
        highlightBuilder.preTags("<font style='color: red; font-size: inherit;'>");
        highlightBuilder.postTags("</font>");
        searchSourceBuilder.highlighter(highlightBuilder);

        searchSourceBuilder.query(boolQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);

        // 3. wrap and return the results
        List<Map> list = new ArrayList<>();
        SearchHit[] hits = searchResponse.getHits().getHits();
        for (SearchHit hit : hits) {
            String json = hit.getSourceAsString();
            Map map = JSON.parseObject(json, Map.class);
            // handle the highlight
            if (hit.getHighlightFields() != null && hit.getHighlightFields().size() > 0) {
                Text[] titles = hit.getHighlightFields().get("title").getFragments();
                String title = StringUtils.join(titles);
                // highlighted title
                map.put("h_title", title);
            } else {
                // original title
                map.put("h_title", map.get("title"));
            }
            list.add(map);
        }
        return ResponseResult.okResult(list);
    }
}


 

MongoDB installation and integration

4.3.1) Install MongoDB

Pull the image

docker pull mongo

Create the container

docker run -di --name mongo-service --restart=always -p 27017:27017 -v ~/data/mongodata:/data mongo

4.3.2) Import the mongo-demo project from the course material into heima-leadnews-test

Three pieces of configuration are key:

First: the mongo dependency

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

Second: the mongo configuration

server:
  port: 9998
spring:
  data:
    mongodb:
      host: 192.168.200.130
      port: 27017
      database: leadnews-history

Third: the mapping

package com.itheima.mongo.pojo;

import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;

import java.io.Serializable;
import java.util.Date;

/**
 * Associated words (search suggestions) table
 * @author itheima
 */
@Data
@Document("ap_associate_words")
public class ApAssociateWords implements Serializable {

    private static final long serialVersionUID = 1L;

    private String id;

    /**
     * associated word
     */
    private String associateWords;

    /**
     * creation time
     */
    private Date createdTime;

}
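With these three pieces in place, MongoTemplate can be injected to read and write the collection; a minimal sketch (test class name assumed):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

import java.util.Date;
import java.util.List;

@SpringBootTest
@RunWith(SpringRunner.class)
public class MongoTemplateTest { // hypothetical test class

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    public void testSaveAndQuery() {
        // save a document
        ApAssociateWords words = new ApAssociateWords();
        words.setAssociateWords("hello world");
        words.setCreatedTime(new Date());
        mongoTemplate.save(words);

        // query documents by field
        Query query = Query.query(Criteria.where("associateWords").is("hello world"));
        List<ApAssociateWords> result = mongoTemplate.find(query, ApAssociateWords.class);
        System.out.println(result);
    }
}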

@Async marks work for asynchronous execution; a minimal sketch follows this list.

1. On a method, it marks that single method as asynchronous.

2. On a class, it marks every method of the class as asynchronous.

3. The annotated class must be managed by Spring.

4. @EnableAsync must be added to the bootstrap class or a configuration class, otherwise @Async has no effect.
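A minimal sketch of the four rules above (class names hypothetical):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Component;

@EnableAsync // rule 4: enable @Async processing
@SpringBootApplication
public class AsyncDemoApplication { // hypothetical bootstrap class

    public static void main(String[] args) {
        SpringApplication.run(AsyncDemoApplication.class, args);
    }
}

@Component // rule 3: the class must be Spring-managed
class EmailService { // hypothetical service

    @Async // rule 1: this method runs on a pool thread; the caller returns immediately
    public void sendEmail(String to) {
        System.out.println(Thread.currentThread().getName() + " sending email to " + to);
    }
}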

xxl-job
Create the xxljob-demo project and import the dependencies:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!--xxl-job-->
    <dependency>
        <groupId>com.xuxueli</groupId>
        <artifactId>xxl-job-core</artifactId>
        <version>2.3.0</version>
    </dependency>
</dependencies>

application.yml configuration:

server:
  port: 8881

xxl:
  job:
    admin:
      addresses: http://192.168.200.130:8888/xxl-job-admin
    executor:
      appname: xxl-job-executor-sample
      port: 9999
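The yml alone does not register an executor; the demo presumably also declares an XxlJobSpringExecutor bean that reads these properties, roughly like this (class name and wiring assumed, property keys matching the yml above):

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig { // assumed configuration class

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        XxlJobSpringExecutor executor = new XxlJobSpringExecutor();
        executor.setAdminAddresses(adminAddresses);
        executor.setAppname(appname);
        executor.setPort(port);
        return executor;
    }
}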

Sharding broadcast: distribute work across executor shards by taking the business key modulo the shard total, so each shard handles only its own subset; a sketch follows.
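A sketch of such a sharded job handler (handler and class name hypothetical; XxlJobHelper is from xxl-job-core 2.3.0):

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class ShardingJobHandler { // hypothetical handler class

    /**
     * Sharded job: each executor instance processes only the ids whose
     * value modulo the shard total equals its own shard index.
     */
    @XxlJob("shardingJobHandler")
    public void shardingJob() {
        int shardIndex = XxlJobHelper.getShardIndex(); // index of this executor instance
        int shardTotal = XxlJobHelper.getShardTotal(); // total number of instances
        for (int id = 0; id < 10; id++) {
            if (id % shardTotal == shardIndex) {
                System.out.println("shard " + shardIndex + " processes id=" + id);
            }
        }
    }
}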

kafkaStream

Import the dependency

Add it to the pom of the earlier kafka-demo project:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <exclusions>
        <exclusion>
            <artifactId>connect-json</artifactId>
            <groupId>org.apache.kafka</groupId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
    </exclusions>
</dependency>
package com.heima.kafka.sample;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.ValueMapper;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

/**
 * Stream processing
 */
public class KafkaStreamQuickStart {

    public static void main(String[] args) {
        // kafka configuration
        Properties prop = new Properties();
        prop.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.136.152:9092");
        prop.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        prop.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        prop.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-sample");

        // stream builder
        StreamsBuilder streamsBuilder = new StreamsBuilder();

        // stream processing
        streamProcessor(streamsBuilder);

        // create the KafkaStreams object
        KafkaStreams kafkaStreams = new KafkaStreams(streamsBuilder.build(), prop);
        // start the stream processing
        kafkaStreams.start();
    }

    /**
     * Stream processing; message value e.g. "hello kafka"
     * @param streamsBuilder
     */
    private static void streamProcessor(StreamsBuilder streamsBuilder) {
        // create the KStream, specifying the topic to receive from
        KStream<String, String> stream = streamsBuilder.stream("itcast-topic-input");
        // split each message value into words
        stream.flatMapValues(new ValueMapper<String, Iterable<String>>() {
            @Override
            public Iterable<String> apply(String value) {
                return Arrays.asList(value.split(" "));
            }
        })
        // group by value (the word)
        .groupBy((key, value) -> value)
        // aggregation time window
        .windowedBy(TimeWindows.of(Duration.ofSeconds(10)))
        // aggregate: count occurrences per word
        .count()
        // back to a KStream
        .toStream()
        // convert key and value of the result to strings
        .map((key, value) -> {
            System.out.println("key:" + key + ",value:" + value);
            return new KeyValue<>(key.key().toString(), value.toString());
        })
        // send the result
        .to("itcast-topic-out");
    }
}

Problem encountered: the Kafka consumer, producer, or kafkaStream does not work.

Cause: the consumer, producer, or kafkaStream started but never registered with kafka.

Solution: you may need to restart (or even reconfigure) zookeeper and kafka on the server, then watch the kafka logs while starting the consumer, producer, or kafkaStream to see whether the registration information is printed in real time; the commands below show one way to check.
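A sketch of how to check (assuming the container names from the install steps above):

# watch the kafka logs for registration output
docker logs -f kafka

# check the broker ids registered in zookeeper
docker exec -it zookeeper zkCli.sh
ls /brokers/ids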


heima-leadnews-behavior->ApLikesBehaviorServiceImpl

package com.heima.behavior.service.impl;

import com.alibaba.fastjson.JSON;
import com.heima.behavior.service.ApLikesBehaviorService;
import com.heima.common.constants.BehaviorConstants;
import com.heima.common.constants.HotArticleConstants;
import com.heima.common.redis.CacheService;
import com.heima.model.behavior.dtos.LikesBehaviorDto;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import com.heima.model.mess.UpdateArticleMess;
import com.heima.model.user.pojos.ApUser;
import com.heima.utils.thread.AppThreadLocalUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
@Slf4j
public class ApLikesBehaviorServiceImpl implements ApLikesBehaviorService {

    @Autowired
    private CacheService cacheService;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Override
    public ResponseResult like(LikesBehaviorDto dto) {
        // 1. check the parameters
        if (dto == null || dto.getArticleId() == null || checkParam(dto)) {
            return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
        }

        // 2. check login
        ApUser user = AppThreadLocalUtil.getUser();
        if (user == null) {
            return ResponseResult.errorResult(AppHttpCodeEnum.NEED_LOGIN);
        }

        UpdateArticleMess mess = new UpdateArticleMess();
        mess.setArticleId(dto.getArticleId());
        mess.setType(UpdateArticleMess.UpdateArticleType.LIKES);

        // 3. like: save the data
        if (dto.getOperation() == 0) {
            Object obj = cacheService.hGet(BehaviorConstants.LIKE_BEHAVIOR + dto.getArticleId().toString(), user.getId().toString());
            if (obj != null) {
                return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID, "已点赞");
            }
            // save the current key
            log.info("保存当前key:{} ,{}, {}", dto.getArticleId(), user.getId(), dto);
            cacheService.hPut(BehaviorConstants.LIKE_BEHAVIOR + dto.getArticleId().toString(), user.getId().toString(), JSON.toJSONString(dto));
            mess.setAdd(1);
        } else {
            // unlike: delete the current key
            log.info("删除当前key:{}, {}", dto.getArticleId(), user.getId());
            cacheService.hDelete(BehaviorConstants.LIKE_BEHAVIOR + dto.getArticleId().toString(), user.getId().toString());
            mess.setAdd(-1);
        }

        // send the message for data aggregation
        kafkaTemplate.send(HotArticleConstants.HOT_ARTICLE_SCORE_TOPIC, JSON.toJSONString(mess));

        return ResponseResult.okResult(AppHttpCodeEnum.SUCCESS);
    }

    /**
     * Validate the parameters
     * @return
     */
    private boolean checkParam(LikesBehaviorDto dto) {
        if (dto.getType() > 2 || dto.getType() < 0 || dto.getOperation() > 1 || dto.getOperation() < 0) {
            return true;
        }
        return false;
    }
}

heima-leadnews-article->config

package com.heima.article.config;

import lombok.Getter;
import lombok.Setter;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;
import org.springframework.kafka.annotation.KafkaStreamsDefaultConfiguration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;

import java.util.HashMap;
import java.util.Map;

/**
 * Re-register the KafkaStreamsConfiguration bean to apply custom configuration parameters.
 */
@Setter
@Getter
@Configuration
@EnableKafkaStreams
@ConfigurationProperties(prefix = "kafka")
public class KafkaStreamConfig {

    private static final int MAX_MESSAGE_SIZE = 16 * 1024 * 1024;

    private String hosts;
    private String group;

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public KafkaStreamsConfiguration defaultKafkaStreamsConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, hosts);
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, this.getGroup() + "_stream_aid");
        props.put(StreamsConfig.CLIENT_ID_CONFIG, this.getGroup() + "_stream_cid");
        props.put(StreamsConfig.RETRIES_CONFIG, 10);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        return new KafkaStreamsConfiguration(props);
    }
}
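Because the class binds @ConfigurationProperties(prefix = "kafka"), the matching entries must exist in the service configuration. A minimal sketch (the broker address is illustrative):

kafka:
  hosts: 192.168.200.130:9092
  group: ${spring.application.name}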

heima-leadnews-article->HotArticleStreamHandler

package com.heima.article.stream;

import com.alibaba.fastjson.JSON;
import com.heima.common.constants.HotArticleConstants;
import com.heima.model.mess.ArticleVisitStreamMess;
import com.heima.model.mess.UpdateArticleMess;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang.StringUtils;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.time.Duration;

@Configuration
@Slf4j
public class HotArticleStreamHandler {

    @Bean
    public KStream<String, String> kStream(StreamsBuilder streamsBuilder) {
        // receive messages
        KStream<String, String> stream = streamsBuilder.stream(HotArticleConstants.HOT_ARTICLE_SCORE_TOPIC);
        // aggregate with stream processing
        stream.map((key, value) -> {
            UpdateArticleMess mess = JSON.parseObject(value, UpdateArticleMess.class);
            // re-key the message: key = article id, value = behavior type:count, e.g. LIKES:1
            return new KeyValue<>(mess.getArticleId().toString(), mess.getType().name() + ":" + mess.getAdd());
        })
        // group by article id
        .groupBy((key, value) -> key)
        // 10-second time window
        .windowedBy(TimeWindows.of(Duration.ofSeconds(10)))
        // custom aggregation
        .aggregate(new Initializer<String>() {
            /**
             * Initial value of the aggregate (becomes the message value).
             */
            @Override
            public String apply() {
                return "COLLECTION:0,COMMENT:0,LIKES:0,VIEWS:0";
            }
        }, new Aggregator<String, String, String>() {
            /**
             * The actual aggregation; the return value becomes the new aggregate.
             */
            @Override
            public String apply(String key, String value, String aggValue) {
                if (StringUtils.isBlank(value)) {
                    return aggValue;
                }
                String[] aggAry = aggValue.split(",");
                int col = 0, com = 0, lik = 0, vie = 0;
                for (String agg : aggAry) {
                    String[] split = agg.split(":");
                    // read the running totals accumulated so far in this window
                    switch (UpdateArticleMess.UpdateArticleType.valueOf(split[0])) {
                        case COLLECTION:
                            col = Integer.parseInt(split[1]);
                            break;
                        case COMMENT:
                            com = Integer.parseInt(split[1]);
                            break;
                        case LIKES:
                            lik = Integer.parseInt(split[1]);
                            break;
                        case VIEWS:
                            vie = Integer.parseInt(split[1]);
                            break;
                    }
                }
                // add the increment carried by the current message
                String[] valAry = value.split(":");
                switch (UpdateArticleMess.UpdateArticleType.valueOf(valAry[0])) {
                    case COLLECTION:
                        col += Integer.parseInt(valAry[1]);
                        break;
                    case COMMENT:
                        com += Integer.parseInt(valAry[1]);
                        break;
                    case LIKES:
                        lik += Integer.parseInt(valAry[1]);
                        break;
                    case VIEWS:
                        vie += Integer.parseInt(valAry[1]);
                        break;
                }
                String formatStr = String.format("COLLECTION:%d,COMMENT:%d,LIKES:%d,VIEWS:%d", col, com, lik, vie);
                System.out.println("article id: " + key);
                System.out.println("aggregate within the current window: " + formatStr);
                return formatStr;
            }
        // the state store name of this stream; if there are multiple streams, each needs a unique name
        }, Materialized.as("hot-article-stream-count-001"))
        .toStream()
        .map((key, value) -> new KeyValue<>(key.key().toString(), formatObj(key.key().toString(), value)))
        // send the aggregated result
        .to(HotArticleConstants.HOT_ARTICLE_INCR_HANDLE_TOPIC);

        return stream;
    }

    /**
     * Format the aggregated value into an ArticleVisitStreamMess JSON string.
     */
    private String formatObj(String articleId, String value) {
        ArticleVisitStreamMess mess = new ArticleVisitStreamMess();
        mess.setArticleId(Long.valueOf(articleId));
        // value format: COLLECTION:%d,COMMENT:%d,LIKES:%d,VIEWS:%d
        String[] valAry = value.split(",");
        for (String val : valAry) {
            String[] split = val.split(":");
            switch (UpdateArticleMess.UpdateArticleType.valueOf(split[0])) {
                case COLLECTION:
                    mess.setCollect(Integer.parseInt(split[1]));
                    break;
                case COMMENT:
                    mess.setComment(Integer.parseInt(split[1]));
                    break;
                case LIKES:
                    mess.setLike(Integer.parseInt(split[1]));
                    break;
                case VIEWS:
                    mess.setView(Integer.parseInt(split[1]));
                    break;
            }
        }
        log.info("result after aggregation: {}", JSON.toJSONString(mess));
        return JSON.toJSONString(mess);
    }
}

heima-leadnews-article->ArticleIncrHandleListener

@Component
@Slf4j
public class ArticleIncrHandleListener {

    @Autowired
    private ApArticleService apArticleService;

    @KafkaListener(topics = HotArticleConstants.HOT_ARTICLE_INCR_HANDLE_TOPIC)
    public void onMessage(String mess) {
        if (StringUtils.isNotBlank(mess)) {
            ArticleVisitStreamMess articleVisitStreamMess = JSON.parseObject(mess, ArticleVisitStreamMess.class);
            apArticleService.updateScore(articleVisitStreamMess);
        }
    }
}

Problem encountered: after the ApArticle list is returned to the frontend, the article ids that arrive at the frontend differ from the ones the backend sent.

Cause: ApArticle's id is of type long, and long values lose precision on the way to the frontend (JavaScript numbers cannot represent all 64-bit integers), so the trailing digits end up as 0.

Solution: handle it with Jackson serialization and deserialization:

  • When the data the backend responds with contains an id, or a field carrying a special marker (customizable), convert that value to String.

  • When a dto passed from the frontend to the backend contains an id, or a field carrying a special marker (customizable), convert that value back to Integer or Long.
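A quick way to see why the frontend mangles the ids: snowflake-style ids exceed JavaScript's Number.MAX_SAFE_INTEGER (2^53 - 1), so the browser cannot represent them exactly. A minimal sketch (the id value is illustrative):

public class PrecisionDemo {
    public static void main(String[] args) {
        long articleId = 1302862387124125698L;     // a typical snowflake-style id
        long jsMaxSafe = (1L << 53) - 1;           // JavaScript Number.MAX_SAFE_INTEGER = 9007199254740991
        System.out.println(articleId > jsMaxSafe); // true -> trailing digits are lost in the browser
    }
}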

Marker classes explained:

IdEncrypt: a custom annotation placed on field properties whose type needs converting; it is meant for properties not named id, and lives in the model module.

package com.heima.model.common.annotation;

import com.fasterxml.jackson.annotation.JacksonAnnotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@JacksonAnnotation
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
public @interface IdEncrypt {
}

Serialization and deserialization classes: the classes below are provided mainly for understanding; they can be copied from the course materials folder directly into the leadnews-common module.

  • ConfusionSerializer: serializes auto-increment numbers in obfuscated (string) form

public class ConfusionSerializer extends JsonSerializer<Object> {

    @Override
    public void serialize(Object value, JsonGenerator jsonGenerator, SerializerProvider serializers) throws IOException {
        try {
            if (value != null) {
                // write the numeric id as a string so the frontend keeps full precision
                jsonGenerator.writeString(value.toString());
                return;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        serializers.defaultSerializeValue(value, jsonGenerator);
    }
}
  • ConfusionDeserializer: decodes the obfuscated values back to numbers during deserialization

public class ConfusionDeserializer extends JsonDeserializer<Object> {

    JsonDeserializer<Object> deserializer = null;
    JavaType type = null;

    public ConfusionDeserializer(JsonDeserializer<Object> deserializer, JavaType type) {
        this.deserializer = deserializer;
        this.type = type;
    }

    @Override
    public Object deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        try {
            if (type != null) {
                // restore the string back to the declared numeric type
                if (type.getTypeName().contains("Long")) {
                    return Long.valueOf(p.getValueAsString());
                }
                if (type.getTypeName().contains("Integer")) {
                    return Integer.valueOf(p.getValueAsString());
                }
            }
            return IdsUtils.decryptLong(p.getValueAsString());
        } catch (Exception e) {
            // fall back to the original deserializer if conversion fails
            if (deserializer != null) {
                return deserializer.deserialize(p, ctxt);
            } else {
                return p.getCurrentValue();
            }
        }
    }
}
  • ConfusionSerializerModifier: selects which fields are handled during serialization

public class ConfusionSerializerModifier extends BeanSerializerModifier {

    @Override
    public List<BeanPropertyWriter> changeProperties(SerializationConfig config, BeanDescription beanDesc, List<BeanPropertyWriter> beanProperties) {
        List<BeanPropertyWriter> newWriter = new ArrayList<>();
        for (BeanPropertyWriter writer : beanProperties) {
            // only fields annotated with @IdEncrypt, or named "id", get the custom serializer
            if (null == writer.getAnnotation(IdEncrypt.class) && !writer.getName().equalsIgnoreCase("id")) {
                newWriter.add(writer);
            } else {
                writer.assignSerializer(new ConfusionSerializer());
                newWriter.add(writer);
            }
        }
        return newWriter;
    }
}
  • ConfusionDeserializerModifier: selects which fields are handled during deserialization

public class ConfusionDeserializerModifier extends BeanDeserializerModifier {

    @Override
    public BeanDeserializerBuilder updateBuilder(final DeserializationConfig config, final BeanDescription beanDescription, final BeanDeserializerBuilder builder) {
        Iterator it = builder.getProperties();
        while (it.hasNext()) {
            SettableBeanProperty p = (SettableBeanProperty) it.next();
            // only fields annotated with @IdEncrypt, or named "id", get the custom deserializer
            if (null != p.getAnnotation(IdEncrypt.class) || p.getName().equalsIgnoreCase("id")) {
                builder.addOrReplaceProperty(p.withValueDeserializer(new ConfusionDeserializer(p.getValueDeserializer(), p.getType())), true);
            }
        }
        return builder;
    }
}
  • ConfusionModule: registers the module and the modifiers

public class ConfusionModule extends Module {

    public final static String MODULE_NAME = "jackson-confusion-encryption";
    public final static Version VERSION = new Version(1, 0, 0, null, "heima", MODULE_NAME);

    @Override
    public String getModuleName() {
        return MODULE_NAME;
    }

    @Override
    public Version version() {
        return VERSION;
    }

    @Override
    public void setupModule(SetupContext context) {
        context.addBeanSerializerModifier(new ConfusionSerializerModifier());
        context.addBeanDeserializerModifier(new ConfusionDeserializerModifier());
    }

    /**
     * Register this module on an ObjectMapper.
     */
    public static ObjectMapper registerModule(ObjectMapper objectMapper) {
        // CamelCase strategy:  Java property personId -> serialized as personId
        // PascalCase strategy: Java property personId -> serialized as PersonId
        // SnakeCase strategy:  Java property personId -> serialized as person_id
        // KebabCase strategy:  Java property personId -> serialized as person-id
        // ignore unknown fields instead of throwing
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        // objectMapper.setPropertyNamingStrategy(PropertyNamingStrategy.SNAKE_CASE);
        return objectMapper.registerModule(new ConfusionModule());
    }
}
  • InitJacksonConfig: auto-configures the default ObjectMapper so the whole framework handles the id conversion automatically

@Configuration
public class InitJacksonConfig {

    @Bean
    public ObjectMapper objectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper = ConfusionModule.registerModule(objectMapper);
        return objectMapper;
    }
}

Add the following to the auto-configuration file spring.factories in the common module:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.heima.common.swagger.SwaggerConfiguration,\
  com.heima.common.swagger.Swagger2Configuration,\
  com.heima.common.exception.ExceptionCatch,\
  com.heima.common.aliyun.GreenTextScan,\
  com.heima.common.aliyun.GreenImageScan,\
  com.heima.common.jackson.InitJacksonConfig

When passing parameters in a dto, if you want a numeric field converted during JSON serialization and deserialization, mark the field with @IdEncrypt, as follows:

@Data
public class ArticleInfoDto {

    // article id
    @IdEncrypt
    Long articleId;
}
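A minimal sketch of the effect, reusing the classes above (assumes Lombok generates the setter and the confusion module is registered as shown earlier):

ObjectMapper mapper = ConfusionModule.registerModule(new ObjectMapper());
ArticleInfoDto dto = new ArticleInfoDto();
dto.setArticleId(1302862387124125698L);
// the @IdEncrypt field is written as a string, so no precision is lost in the browser
System.out.println(mapper.writeValueAsString(dto)); // {"articleId":"1302862387124125698"}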


 

 

 

Introduction to Jenkins

Jenkins is a popular open-source continuous integration (CI) tool, widely used in project development, with features such as automated building, testing, and deployment. Official site: https://www.jenkins.io/

Key features of Jenkins

  • An open-source CI tool written in Java, supporting both continuous integration and continuous deployment.

  • Easy to install, deploy, and configure: it can be installed via yum, by downloading the war package, or as a Docker container, and it can be conveniently configured and managed from the web UI.

  • Notifications and test reports: integrates RSS/E-mail to publish build results via RSS or send an e-mail when a build finishes, and generates JUnit/TestNG test reports.

  • Distributed builds: Jenkins can spread builds and tests across multiple machines.

  • File fingerprinting: Jenkins can track which build produced which jars, which build used which version of a jar, and so on.

  • Rich plugin support: extension plugins are available for git, svn, maven, docker, and more, and you can develop tools tailored to your own team.

Jenkins installation and configuration

  • First, the server needs a Java environment (JDK 11 or later).
  • Download the jenkins.war package from the Jenkins official site.

  • Upload jenkins.war to the server and run it with the java command to start the Jenkins service; here we also set the port to 16060 (see the sketch after this list).

  • If starting Jenkins with OpenJDK 11 on CentOS 7 reports the following error:
hudson.util.AWTProblem at hudson.WebAppMain.contextInitialized(WebAppMain.java:251)
  • the fix is to install the missing font support:
yum install fontconfig
  • Start Jenkins again and check the startup log: it notes that an admin account has been created automatically and an initial password has been generated.

  • On the first visit to the Jenkins service in a browser, this password must be entered in order to log in.
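A start command consistent with the steps above might look like this (a sketch: the war location and log path are illustrative; --httpPort is Jenkins' standard flag for choosing the HTTP port, and the password file sits under the default Jenkins home):

nohup java -jar /usr/local/jenkins/jenkins.war --httpPort=16060 > jenkins.log 2>&1 &
# the initial admin password is printed in the log and also stored here (for root):
cat /root/.jenkins/secrets/initialAdminPassword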

Go with the defaults and install all of the suggested plugins.

This step takes a while. Once installation completes, create an admin user:

Configure the access URL:

After configuration Jenkins restarts, and then the admin console is available:

 

 

Jenkins tool configuration

1. Go to Manage Jenkins --> Global Tool Configuration

2. Configure the global Maven settings file

3. Specify the JDK

4. Specify the Maven home directory

5. Specify the Docker directory

If you are not sure where Docker is installed, run whereis docker to find its installation directory.

 

Every microservice adds this same build configuration; the heima-leadnews-user microservice is used as the example:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>heima-leadnews-service</artifactId>
        <groupId>com.heima</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>heima-leadnews-user</artifactId>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <docker.image>docker_storage</docker.image>
    </properties>

    <build>
        <finalName>heima-leadnews-user</finalName>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>repackage</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.7.0</version>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>dockerfile-maven-plugin</artifactId>
                <version>1.3.6</version>
                <configuration>
                    <repository>${docker.image}/${project.artifactId}</repository>
                    <buildArgs>
                        <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                    </buildArgs>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

The Dockerfile integrated into each service:

# base image: Java 8
FROM java:8
# declare a volume: anything written to /tmp is not recorded in the container's storage layer
VOLUME /tmp
# copy the runnable jar
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
# JVM options: cap memory here to reduce overhead
ENV JAVA_OPTS="\
-server \
-Xms256m \
-Xmx512m \
-XX:MetaspaceSize=256m \
-XX:MaxMetaspaceSize=512m"
# empty by default, so extra arguments can be passed in when creating a container
ENV PARAMS=""
# entrypoint: run the jar
ENTRYPOINT ["sh","-c","java $JAVA_OPTS -jar /app.jar $PARAMS"]

When compiling and packaging with Jenkins, the build may fail because the image source has gone stale and the base image cannot be pulled. In that case, update the base image in the Dockerfile: replace java:8 in FROM java:8 with the image version you need. For example, to use JDK 8 you can change it to FROM openjdk:8-jdk.
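Before wiring the build into Jenkins, the image can be sanity-checked by hand. A sketch using the same Maven goal and run parameters that the Jenkins job uses later in this section:

# build the jar and the image (dockerfile-maven-plugin tags it docker_storage/heima-leadnews-user)
mvn clean install -Dmaven.test.skip=true dockerfile:build -f heima-leadnews/heima-leadnews-service/heima-leadnews-user/pom.xml
# run it the same way the Jenkins shell step does
docker run -d --net=host -e PARAMS="--spring.profiles.active=prod" --name heima-leadnews-user docker_storage/heima-leadnews-user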


 


Problem encountered: the Jenkins project fails to compile.

Cause: on the Jenkins server, the dependencies that Maven downloads into the server's local repository may be corrupted or incomplete; even deleting them and rebuilding runs into the same problem.

Solution: replace the possibly corrupted or incomplete packages in the Jenkins server's local Maven repository with the corresponding packages from your own local Maven repository (or even replace the whole folder for the affected dependency), since the project builds fine against the local repository.


clean install -Dmaven.test.skip=true  dockerfile:build -f heima-leadnews/heima-leadnews-service/heima-leadnews-user/pom.xml

if [ -n "$(docker ps -a -f name=$JOB_NAME --format '{{.ID}}')" ]
then
    # remove the previous container
    docker rm -f $(docker ps -a -f name=$JOB_NAME --format '{{.ID}}')
fi

# clean up dangling images
docker image prune -f

# start the service container
docker run -d --net=host -e PARAMS="--spring.profiles.active=prod" --name $JOB_NAME docker_storage/$JOB_NAME

Build the image

Clean up the old container and create a new one

You can see the service has now been started as a Docker container


 

Installing and configuring a private registry

In a continuous integration environment, Jenkins publishes a large number of microservices and has to interact with multiple machines. This could be done by saving and exporting Docker images combined with SSH, but that kind of interaction is cumbersome, unstable, and hard to manage. Instead we set up a private Docker registry, which works somewhat like a Git repository: resources are managed centrally, and clients pull or update from it.

  1. Pull the latest Registry image

    docker pull registry:latest

  2. Start the Registry service

    docker run -d -p 5000:5000 --name registry -v /usr/local/docker/registry:/var/lib/registry registry:latest

    This maps port 5000; -v binds the Registry's image data volume to a local directory, which makes the Registry's data easier to manage and maintain.

  3. Check the repository contents

    Visit: http://192.168.200.100:5000/v2/_catalog

    If the service started normally, it returns:

    {"repositories":[]}

    No images have been uploaded yet, so the list is empty.

    After a successful push, data shows up:
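For example, after manually tagging and pushing one of the service images (addresses as configured in this setup), the catalog is no longer empty:

docker tag docker_storage/heima-leadnews-user 192.168.200.100:5000/docker_storage/heima-leadnews-user
docker push 192.168.200.100:5000/docker_storage/heima-leadnews-user
curl http://192.168.200.100:5000/v2/_catalog
# {"repositories":["docker_storage/heima-leadnews-user"]}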

Install plugins in Jenkins

Configure the remote server connection in Jenkins

Location: Manage Jenkins --> Configure System

Credentials also need to be added

Location: Manage Jenkins --> Manage Credentials

Add the username and password for connecting to the 130 server

 

 

Maven command

clean install -Dmaven.test.skip=true dockerfile:build -f heima-leadnews/heima-leadnews-service/heima-leadnews-article/pom.xml

Shell script

image_tag=$docker_registry/docker_storage/$JOB_NAME
echo '================ docker image cleanup ================'
if [ -n "$(docker ps -a -f name=$JOB_NAME --format '{{.ID}}')" ]
then
    # remove the previous container
    docker rm -f $(docker ps -a -f name=$JOB_NAME --format '{{.ID}}')
fi

# clean up dangling images
docker image prune -f

# tag the image for the private registry
docker tag docker_storage/$JOB_NAME $image_tag
echo '================ docker image push ================'
# push the image
docker push $image_tag
# remove the local tag
docker rmi $image_tag
echo '================ docker tag cleanup ================'

Shell script executed on the remote server

echo '================ pull the latest image ================'
docker pull $docker_registry/docker_storage/$JOB_NAME

echo '================ remove the old container ================'
if [ -n "$(docker ps -a -f name=$JOB_NAME --format '{{.ID}}')" ]
then
    # remove the previous container
    docker rm -f $(docker ps -a -f name=$JOB_NAME --format '{{.ID}}')
fi

# clean up dangling images
docker image prune -f

echo '================ start the container ================'
docker run -d --net=host -e PARAMS="--spring.profiles.active=prod" --name $JOB_NAME $docker_registry/docker_storage/$JOB_NAME


Problem encountered: pushing the image to the Docker registry fails

received unexpected HTTP status: 500 Internal Server Error
Build step 'Execute shell' marked build as failure
Finished: FAILURE

Cause: SELinux has not been disabled, which causes Docker to misbehave.

Solution:

Disable SELinux.

Temporarily:

[root@ip-10-0-1-46 ~]# setenforce 0

Or permanently:

[root@ip-10-0-1-46 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
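To verify the change (a sketch; getenforce reports Permissive right after setenforce 0, and Disabled after a reboot with the permanent setting):

getenforce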

Then push again.


Problem encountered: pulling the image from the Docker registry over SSH fails

echo '=============== start the container ================'
docker run -d --net=host -e PARAMS="--spring.profiles.active=prod" --name $JOB_NAME $docker_registry/docker_storage/$JOB_NAME
[SSH] executing...
Error response from daemon: Get "https://192.168.136.153:5000/v2/": http: server gave HTTP response to HTTPS client
Unable to find image '192.168.136.153:5000/docker_storage/heima-leadnews-article:latest' locally
docker: Error response from daemon: Get "https://192.168.136.153:5000/v2/": http: server gave HTTP response to HTTPS client.
See 'docker run --help'.
================ pull the latest image ================
Using default tag: latest
================ remove the old container ================
Total reclaimed space: 0B
================ start the container ================
[SSH] completed
[SSH] exit-status: 125

Build step 'Execute shell script on remote host using ssh' marked build as failure
Finished: FAILURE

Cause: Docker on the pulling server is not configured with the Registry's service IP and port. In other words, the Docker daemon on both the pushing server and the pulling server must be configured to point at the IP and port of the server where the Registry is installed.

Solution:

First make sure the Docker client is installed on the continuous integration machine, then make the following change:

vi /lib/systemd/system/docker.service

Modify:

ExecStart=/usr/bin/dockerd --insecure-registry=192.168.136.153:5000

pointing at the Registry's service IP and port.

Restart for the change to take effect:

systemctl daemon-reload
systemctl restart docker.service
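On Docker versions that read /etc/docker/daemon.json, an equivalent approach (an assumption of this note, not what the course used) is to declare the insecure registry there instead of editing the systemd unit:

cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.136.153:5000"]
}
EOF
systemctl daemon-reload && systemctl restart docker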


Problem encountered:

javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
    at sun.security.ssl.HandshakeContext.<init>(HandshakeContext.java:171) ~[na:1.8.0_292]
    at sun.security.ssl.ClientHandshakeContext.<init>(ClientHandshakeContext.java:98) ~[na:1.8.0_292]
    at sun.security.ssl.TransportContext.kickstart(TransportContext.java:220) ~[na:1.8.0_292]
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:428) ~[na:1.8.0_292]
    at com.mysql.cj.protocol.ExportControlled.performTlsHandshake(ExportControlled.java:316) ~[mysql-connector-java-8.0.17.jar:8.0.17]
    at com.mysql.cj.protocol.StandardSocketFactory.performTlsHandshake(StandardSocketFactory.java:188) ~[mysql-connector-java-8.0.17.jar:8.0.17]
    at com.mysql.cj.protocol.a.NativeSocketConnection.performTlsHandshake(NativeSocketConnection.java:99) ~[mysql-connector-java-8.0.17.jar:8.0.17]
    at com.mysql.cj.protocol.a.NativeProtocol.negotiateSSLConnection(NativeProtocol.java:331) ~[mysql-connector-java-8.0.17.jar:8.0.17]
    ... 68 common frames omitted

Cause: recent builds of Java 8 and later disable the older SSL/TLS protocols by default in java.security, so SSL calls that depend on them throw javax.net.ssl.SSLHandshakeException: No appropriate protocol.

Solution 1:

Add useSSL=false to the JDBC URL:

spring:
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://192.168.136.152:3306/leadnews_article?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC&useSSL=false

Solution 2:

Under lib\security in the JRE directory there is a java.security file. Find the corresponding SSLv3 entry in the disabled-algorithms list and delete it, then restart the project (deleting SSLv3 re-enables that SSL protocol).

Then restart the Java service.


 
