Author: jqpeng
Original post: Implementing an ultra-lightweight gateway (reverse proxy / forwarding) with Spring Boot

In our REST service we need to expose a middleware's API to users, but requests must pass through the REST service's authentication first. This is a typical gateway scenario. We could bring in a gateway component, but introducing middleware such as Zuul adds complexity to the system, so here we implement an ultra-lightweight gateway that only does request forwarding; authentication is still handled by the REST service's Spring Security.

How do we forward a request? Anyone familiar with HTTP knows a request is essentially the request method, the HTTP headers, and the request body. We simply extract these and pass them through to the target URL.

For example:

/graphdb/** is forwarded to Graph_Server/**

Build the forwarding target URL:

private String createRedirectUrl(HttpServletRequest request, String routeUrl, String prefix) {
        String queryString = request.getQueryString();
        return routeUrl + request.getRequestURI().replace(prefix, "") +
                (queryString != null ? "?" + queryString : "");
    }

Parsing the request headers and body

Then extract the headers, body, and other details from the request and build a RequestEntity, which can later be sent with RestTemplate.

private RequestEntity createRequestEntity(HttpServletRequest request, String url) throws URISyntaxException, IOException {
        String method = request.getMethod();
        HttpMethod httpMethod = HttpMethod.resolve(method);
        MultiValueMap<String, String> headers = parseRequestHeader(request);
        byte[] body = parseRequestBody(request);
        return new RequestEntity<>(body, headers, httpMethod, new URI(url));
    }


    private byte[] parseRequestBody(HttpServletRequest request) throws IOException {
        InputStream inputStream = request.getInputStream();
        return StreamUtils.copyToByteArray(inputStream);
    }

    private MultiValueMap<String, String> parseRequestHeader(HttpServletRequest request) {
        HttpHeaders headers = new HttpHeaders();
        List<String> headerNames = Collections.list(request.getHeaderNames());
        for (String headerName : headerNames) {
            List<String> headerValues = Collections.list(request.getHeaders(headerName));
            for (String headerValue : headerValues) {
                headers.add(headerName, headerValue);
            }
        }
        return headers;
    }

Transparent forwarding

Finally, RestTemplate performs the actual request:

 private ResponseEntity<String> route(RequestEntity requestEntity) {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.exchange(requestEntity, String.class);
    }

Complete code

The complete code of the lightweight forwarder:

import org.springframework.http.*;
import org.springframework.stereotype.Service;
import org.springframework.util.MultiValueMap;
import org.springframework.util.StreamUtils;
import org.springframework.web.client.RestTemplate;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Collections;
import java.util.List;

@Service
public class RoutingDelegate {


    public ResponseEntity<String> redirect(HttpServletRequest request, HttpServletResponse response,String routeUrl, String prefix) {
        try {
            // build up the redirect URL
            String redirectUrl = createRedirectUrl(request, routeUrl, prefix);
            RequestEntity requestEntity = createRequestEntity(request, redirectUrl);
            return route(requestEntity);
        } catch (Exception e) {
            return new ResponseEntity("REDIRECT ERROR", HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    private String createRedirectUrl(HttpServletRequest request, String routeUrl, String prefix) {
        String queryString = request.getQueryString();
        return routeUrl + request.getRequestURI().replace(prefix, "") +
                (queryString != null ? "?" + queryString : "");
    }


    private RequestEntity createRequestEntity(HttpServletRequest request, String url) throws URISyntaxException, IOException {
        String method = request.getMethod();
        HttpMethod httpMethod = HttpMethod.resolve(method);
        MultiValueMap<String, String> headers = parseRequestHeader(request);
        byte[] body = parseRequestBody(request);
        return new RequestEntity<>(body, headers, httpMethod, new URI(url));
    }
    private ResponseEntity<String> route(RequestEntity requestEntity) {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.exchange(requestEntity, String.class);
    }


    private byte[] parseRequestBody(HttpServletRequest request) throws IOException {
        InputStream inputStream = request.getInputStream();
        return StreamUtils.copyToByteArray(inputStream);
    }

    private MultiValueMap<String, String> parseRequestHeader(HttpServletRequest request) {
        HttpHeaders headers = new HttpHeaders();
        List<String> headerNames = Collections.list(request.getHeaderNames());
        for (String headerName : headerNames) {
            List<String> headerValues = Collections.list(request.getHeaders(headerName));
            for (String headerValue : headerValues) {
                headers.add(headerName, headerValue);
            }
        }
        return headers;
    }
}

Spring integration

In the Spring controller, list GET, POST, PUT, and DELETE in the @RequestMapping methods, and all of them will be forwarded:

@RestController
@RequestMapping(GraphDBController.DELEGATE_PREFIX)
@Api(value = "GraphDB", tags = {
        "graphdb-Api"
})
public class GraphDBController {

    @Autowired
    GraphProperties graphProperties;

    public final static String DELEGATE_PREFIX = "/graphdb";

    @Autowired
    private RoutingDelegate routingDelegate;

    @RequestMapping(value = "/**", method = {RequestMethod.GET, RequestMethod.POST, RequestMethod.PUT, RequestMethod.DELETE}, produces = MediaType.TEXT_PLAIN_VALUE)
    public ResponseEntity catchAll(HttpServletRequest request, HttpServletResponse response) {
        return routingDelegate.redirect(request, response, graphProperties.getGraphServer(), DELEGATE_PREFIX);
    }
}
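GraphProperties above is just a configuration holder that the original post does not show; a minimal sketch, assuming a property such as graph.graph-server in the application configuration, might look like this:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "graph")
public class GraphProperties {

    // assumed property: graph.graph-server=http://graph-server:8080
    private String graphServer;

    public String getGraphServer() {
        return graphServer;
    }

    public void setGraphServer(String graphServer) {
        this.graphServer = graphServer;
    }
}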

Author: Jadepeng
Source: jqpeng's tech notebook, http://www.cnblogs.com/xiaoqi
Your support is the greatest encouragement to the author; thank you for reading.
The copyright of this article belongs to the author. Reposting is welcome, but this notice must be kept and a link to the original given in a prominent position on the page; otherwise the author reserves the right to pursue legal liability.

Author: jqpeng
Original post: Adding SPARQL and Cypher support to HugeGraph

HugeGraph is Baidu's open-source graph database built on TinkerPop, and it supports queries via Gremlin.

Here we extend it to also support SPARQL and Cypher.

SPARQL support

There is a SparqlToGremlinCompiler on GitHub that can translate SPARQL into a GraphTraversal; integrating that library is all it takes:

@Path("graphs/{graph}/sparql")
@Singleton
public class SparqlAPI extends API {

    private static final Logger LOG = Log.logger(SparqlAPI.class);

    @GET
    @Timed
    @CompressInterceptor.Compress
    @Produces(APPLICATION_JSON_WITH_CHARSET)
    public String query(@Context GraphManager manager,
                        @PathParam("graph") String graph,
                        @QueryParam("sparql") String sparql) {
        LOG.debug("Graph [{}] query by sparql: {}", graph, sparql);

        E.checkArgument(sparql != null && !sparql.isEmpty(),
                "The sparql parameter can't be null or empty");

        HugeGraph g = graph(manager, graph);
        GraphTraversal<Vertex, ?> traversal = SparqlToGremlinCompiler.convertToGremlinTraversal(g, sparql);
        List<?> result = traversal.toList();
        if (result.size() > 0) {
            Object item = result.get(0);
            if (item instanceof Vertex) {
                return manager.serializer(g).writeVertices((Iterator<Vertex>) result.iterator(), false);
            }
            if (item instanceof Map) {
                return manager.serializer(g).writeMap((Map) item);
            }
        }

        return result.toString();
    }
}

Cypher support

opencypher provides a translation package that converts Cypher into Gremlin:

        <dependency>
            <groupId>org.opencypher.gremlin</groupId>
            <artifactId>translation</artifactId>
            <version>1.0.4</version>
        </dependency>

Conversion code:

 TranslationFacade cfog = new TranslationFacade();
 String gremlin = cfog.toGremlinGroovy(cypher);

Add the API code:

     @GET
    @Timed
    @CompressInterceptor.Compress
    @Produces(APPLICATION_JSON_WITH_CHARSET)
    public Response query(@Context GraphManager manager,
                          @PathParam("graph") String graph,
                          @Context HugeConfig conf,
                          @Context HttpHeaders headers,
                          @QueryParam("cypher") String cypher) {
        LOG.debug("Graph [{}] query by cypher: {}", graph, cypher);

        return getResponse(graph, headers, cypher);
    }

    private Response getResponse(@PathParam("graph") String graph, @Context HttpHeaders headers, @QueryParam("cypher") String cypher) {
        E.checkArgument(cypher != null && !cypher.isEmpty(),
                "The cypher parameter can't be null or empty");

        TranslationFacade cfog = new TranslationFacade();
        String gremlin = cfog.toGremlinGroovy(cypher);
        gremlin = "g = " + graph + ".traversal()\n" + gremlin;
        String auth = headers.getHeaderString(HttpHeaders.AUTHORIZATION);
        MultivaluedMap<String, String> params = new MultivaluedHashMap<>();
        params.put("gremlin", Arrays.asList(gremlin));
        Response response = this.client().doGetRequest(auth, params);
        return transformResponseIfNeeded(response);
    }


Author: jqpeng
Original post: Graph database query languages

This article introduces the Gremlin and Cypher query languages supported by the graph database.

Initializing the data

The statements below can be executed through the gremlin API.

gremlin api

POST http://localhost:8080/gremlin

{
    "gremlin": "statement goes here",
    "bindings": {},
    "language": "gremlin-groovy",
    "aliases": {"graph": "graphname", "g": "__g_graphname"}
}
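As an illustration, the request above can also be issued from Java; this is a minimal sketch assuming Java 11+'s built-in HttpClient and the local gremlin endpoint shown above (the graph name and the statement are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GremlinApiDemo {
    public static void main(String[] args) throws Exception {
        // request body in the gremlin API format shown above
        String body = "{\"gremlin\": \"g.V().limit(1)\", \"bindings\": {}, "
                + "\"language\": \"gremlin-groovy\", "
                + "\"aliases\": {\"graph\": \"hugegraph\", \"g\": \"__g_hugegraph\"}}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/gremlin"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}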

schema

schema = hugegraph.schema()

schema.propertyKey("name").asText().ifNotExist().create()
schema.propertyKey("age").asInt().ifNotExist().create()
schema.propertyKey("time").asInt().ifNotExist().create()
schema.propertyKey("reason").asText().ifNotExist().create()
schema.propertyKey("type").asText().ifNotExist().create()

schema.vertexLabel("character").properties("name", "age", "type").primaryKeys("name").nullableKeys("age").ifNotExist().create()
schema.vertexLabel("location").properties("name").primaryKeys("name").ifNotExist().create()

schema.edgeLabel("father").link("character", "character").ifNotExist().create()
schema.edgeLabel("mother").link("character", "character").ifNotExist().create()
schema.edgeLabel("battled").link("character", "character").properties("time").ifNotExist().create()
schema.edgeLabel("lives").link("character", "location").properties("reason").nullableKeys("reason").ifNotExist().create()
schema.edgeLabel("pet").link("character", "character").ifNotExist().create()
schema.edgeLabel("brother").link("character", "character").ifNotExist().create()

Insert data

// add vertices
Vertex saturn = graph.addVertex(T.label, "character", "name", "saturn", "age", 10000, "type", "titan")
Vertex sky = graph.addVertex(T.label, "location", "name", "sky")
Vertex sea = graph.addVertex(T.label, "location", "name", "sea")
Vertex jupiter = graph.addVertex(T.label, "character", "name", "jupiter", "age", 5000, "type", "god")
Vertex neptune = graph.addVertex(T.label, "character", "name", "neptune", "age", 4500, "type", "god")
Vertex hercules = graph.addVertex(T.label, "character", "name", "hercules", "age", 30, "type", "demigod")
Vertex alcmene = graph.addVertex(T.label, "character", "name", "alcmene", "age", 45, "type", "human")
Vertex pluto = graph.addVertex(T.label, "character", "name", "pluto", "age", 4000, "type", "god")
Vertex nemean = graph.addVertex(T.label, "character", "name", "nemean", "type", "monster")
Vertex hydra = graph.addVertex(T.label, "character", "name", "hydra", "type", "monster")
Vertex cerberus = graph.addVertex(T.label, "character", "name", "cerberus", "type", "monster")
Vertex tartarus = graph.addVertex(T.label, "location", "name", "tartarus")

// add edges
jupiter.addEdge("father", saturn)
jupiter.addEdge("lives", sky, "reason", "loves fresh breezes")
jupiter.addEdge("brother", neptune)
jupiter.addEdge("brother", pluto)
neptune.addEdge("lives", sea, "reason", "loves waves")
neptune.addEdge("brother", jupiter)
neptune.addEdge("brother", pluto)
hercules.addEdge("father", jupiter)
hercules.addEdge("mother", alcmene)
hercules.addEdge("battled", nemean, "time", 1)
hercules.addEdge("battled", hydra, "time", 2)
hercules.addEdge("battled", cerberus, "time", 12)
pluto.addEdge("brother", jupiter)
pluto.addEdge("brother", neptune)
pluto.addEdge("lives", tartarus, "reason", "no fear of death")
pluto.addEdge("pet", cerberus)
cerberus.addEdge("lives", tartarus)

Visualization

Creating an index

Create the index using the REST API:

POST http://localhost:8080/graphs/hugegraph/schema/indexlabels

{
    "name": "characterAge",
    "base_type": "VERTEX_LABEL",
    "base_value": "character",
    "index_type": "RANGE",
    "fields": ["age"]
}

Queries

API overview

The service exposes gremlin, sparql, and cypher APIs; gremlin and Cypher are recommended.

cypher api

http://127.0.0.1:8080/graphs/hugegraph/cypher?cypher=

e.g.:

curl http://127.0.0.1:8080/graphs/hugegraph/cypher?cypher=MATCH%20(n:character)-[:lives]-%3E(location)-[:lives]-(cohabitants)%20where%20n.name=%27pluto%27%20return%20cohabitants.name

gremlin api

POST http://localhost:8080/gremlin

{
    "gremlin": "statement goes here",
    "bindings": {},
    "language": "gremlin-groovy",
    "aliases": {"graph": "graphname", "g": "__g_graphname"}
}

sparql api

GET http://127.0.0.1:8080/graphs/hugegraph/sparql?sparql=SELECT%20*%20WHERE%20{%20}

1. Find the grandfather of hercules

g.V().hasLabel('character').has('name','hercules').out('father').out('father')

Alternatively, using repeat:

g.V().hasLabel('character').has('name','hercules').repeat(__.out('father')).times(2)

cypher

MATCH (n:character)-[:father]->()-[:father]->(grandfather) where n.name='hercules' return grandfather

2. Find the name of hercules’s father

g.V().hasLabel('character').has('name','hercules').out('father').values('name')

cypher

MATCH (n:character)-[:father]->(father) where n.name='hercules' return father.name

3. Find the characters with age > 100

g.V().hasLabel('character').has('age',gt(100))

cypher

MATCH (n:character) where n.age > 100 return n

4. Find who are pluto’s cohabitants

g.V().hasLabel('character').has('name','pluto').out('lives').in('lives').values('name')

cypher

MATCH (n:character)-[:lives]->(location)-[:lives]-(cohabitants) where n.name='pluto' return cohabitants.name

5. Find pluto's cohabitants, excluding pluto himself

pluto = g.V().hasLabel('character').has('name', 'pluto').next()
g.V(pluto).out('lives').in('lives').where(is(neq(pluto))).values('name')

// use 'as'
g.V().hasLabel('character').has('name', 'pluto').as('x').out('lives').in('lives').where(neq('x')).values('name')




cypher> MATCH (src:character{name:"pluto"})-[:lives]->()<-[:lives]-(dst:character) RETURN dst.name

6. Pluto’s Brothers

pluto = g.V().hasLabel('character').has('name', 'pluto').next()
// where do pluto's brothers live?
g.V(pluto).out('brother').out('lives').values('name')

// which brother lives in which place?
g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place')

// what is the name of the brother and the name of the place?
g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name')



MATCH (src:Character{name:"pluto"})-[:brother]->(bro:Character)-[:lives]->(dst)
RETURN bro.name, dst.name


Author: jqpeng
Original post: Fixing an axios memory leak in the browser

Symptom

On a business page, frequently switching to the next item made memory climb rapidly and the page lag. The audio player was suspected at first, but the problem remained after changing it, so the network requests were examined next.

Searching the axios issues turns up quite a few memory-leak threads; a typical one is Axios doesn't address memory leaks?:

It mentions that version 0.19.2 has no issue, but the problem appears after upgrading to 0.20.0.

Two possible fixes:

  • Downgrade to 0.19.2
  • In newer versions, don't call axios directly; create an instance first
const instance = axios.create({...}) // instead of axios.get(), post(), put() etc.

Checking our business code, every request was creating a new instance. Version issues aside, creating an instance per request is bound to cause memory problems, so it is better to create a single instance up front and reuse it:

import axios, { AxiosRequestConfig, AxiosResponse } from 'axios'

// create a single shared instance
const axiosInstance = axios.create() 

  save(parameters: {
    'data': ResultSaveParam,
    $queryParameters?: any,
    $domain?: string
  }): Promise<AxiosResponse<ApiResult>> {
     .... // use axiosInstance to build and send the request
    return axiosInstance.request(config)
  }


Author: jqpeng
Original post: Giving Swagger a new skin: notes on integrating Knife4j

Swagger ships with a classic UI, but it is not very pleasant to use. I had come across Knife4j before; its interface is good-looking and feature-complete, so I gave the integration a try.

Reference demo project: knife4j-spring-boot-demo

Knife4j, formerly known as swagger-bootstrap-ui, is a tool that enhances Swagger API documentation.

Following the official documentation, integration is very straightforward.

Maven dependency

Step one: add the knife4j dependency to the project's pom.xml:

<dependencies>
    <dependency>
        <groupId>com.github.xiaoymin</groupId>
        <artifactId>knife4j-spring-boot-starter</artifactId>
        <version>2.0.6</version>
    </dependency>
</dependencies>

The latest version at the time of writing is 2.0.6.

If you prefer to import it via a BOM, see the Maven BOM instructions.

Create the Swagger configuration

Create a SwaggerConfiguration.java file and define the Docket group bean provided by springfox:

@Configuration
@EnableSwagger2
@EnableKnife4j
@Import(BeanValidatorPluginsConfiguration.class)
public class SwaggerConfiguration {

    @Bean(value = "defaultApi2")
    public Docket defaultApi2() {
        Docket docket=new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo())
                // group name
                .groupName("2.X版本")
                .select()
                // base package of the controllers to scan
                .apis(RequestHandlerSelectors.basePackage("com.swagger.bootstrap.ui.demo.new2"))
                .paths(PathSelectors.any())
                .build();
        return docket;
    }

}

Two of the annotations above deserve special mention:

Annotation | Description
@EnableSwagger2 | Provided by the Springfox-Swagger framework to enable Swagger; this annotation is required.
@EnableKnife4j | Provided by knife4j; it enables the UI's enhanced features such as dynamic parameters, parameter filtering, and API ordering. Add it if you want those enhancements; otherwise it can be omitted.

Exempting the docs from spring-security authentication

Add /**/doc.html/** and the other swagger resources to the permit list:

    private String[] getSwaggerUrl() {
        List<String> urls = new ArrayList<String>();
        urls.add("/**/swagger-resources/**");
        urls.add("/**/webjars/**");
        urls.add("/**/doc.html/**");
        urls.add("/**/v2/**");
        urls.add("/**/swagger-ui.html/**");
        return urls.toArray(new String[urls.size()]);
    }

    http.authorizeRequests().antMatchers(getSwaggerUrl()).permitAll()
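For context, here is a minimal sketch of where those two pieces could live, assuming a classic WebSecurityConfigurerAdapter style configuration (the class name and the final anyRequest() rule are illustrative, not from the original post):

import java.util.ArrayList;
import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    private String[] getSwaggerUrl() {
        List<String> urls = new ArrayList<String>();
        urls.add("/**/swagger-resources/**");
        urls.add("/**/webjars/**");
        urls.add("/**/doc.html/**");
        urls.add("/**/v2/**");
        urls.add("/**/swagger-ui.html/**");
        return urls.toArray(new String[urls.size()]);
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                // swagger / knife4j resources are exempt from authentication
                .antMatchers(getSwaggerUrl()).permitAll()
                // everything else still requires authentication
                .anyRequest().authenticated();
    }
}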

Testing it

Open http://host:port/doc.html in the browser.

Global parameters can be configured:

Global parameters

Online debugging is supported:

Online debugging

Offline documentation can be exported as md, pdf, and other formats:

Export documentation

Finally

How can the front end call the API more elegantly? See:

Vue with TypeScript: calling the Swagger API elegantly

When I find time, that could be integrated into knife4j as well.



Author: jqpeng
Original post: Recommendation algorithms: DeepFM, tested with DeepCTR

Algorithm overview

A deep network on the left and FM on the right, hence the name DeepFM.

DeepFm

It consists of two parts:

  • Part 1: FM (Factorization Machines), the factorization-machine part

FM

On top of traditional first-order linear regression, FM adds a second-order term that captures pairwise feature interactions.

Pairwise feature interactions
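Written out, the standard FM prediction (reproduced here for reference) is:

\hat{y}(x) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j

where w_0 is the bias, w_i are the first-order weights, and \mathbf{v}_i is the latent (embedding) vector of feature i.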

This formula can be rewritten to reduce the amount of computation; the figure below is taken from the web.

FM

  • Part 2: the Deep part

The deep part is a multi-layer DNN.

Implementation

For the implementation, 用Keras实现一个DeepFM and ·清尘·'s 《FM、FMM、DeepFM整理(pytorch)》 explain things quite clearly; the Keras implementation is referenced here.

The overall network structure:

Network structure

Feature encoding

Features fall into three categories:

  • Continuous fields, e.g. numeric features
  • Single-valued categorical fields, e.g. gender, which can be male or female
  • Multi-valued categorical fields, e.g. tag, which can hold several values at once

Continuous fields can simply be concatenated together as dense data.

After one-hot encoding, a single-valued categorical field has exactly one position set to 1 in its one-hot vector, while a multi-valued field has more than one position set to 1, meaning that field takes several feature values at the same time.

label  shop_score  gender=m  gender=f  interest=f  interest=c
0      0.2         1         0         1           1
1      0.8         0         1         0           1

FM part

FM

Look at the formula:
FM

First compute the FM first-order term:

  • Continuous fields can be handled with a Dense(1) layer
  • Single-valued categorical fields use Embedding(n, 1), where n is the number of distinct values
  • Multi-valued categorical fields can take several values at once, so samples must be zero-padded for batch training; they can also be handled with an Embedding, and since several embeddings result, their mean is taken

First-order term

Then compute the FM second-order term, which takes a bit more effort to understand.

·清尘·'s 《FM、FMM、DeepFM整理(pytorch)》 walks through this process very clearly; see that post for the derivation.

For the concrete implementation, the explanation in DeepFM模型CTR预估理论与实战 is easier to follow.

FM formula

Suppose there are only the two categorical features C1 and C2 from before, with vocabulary sizes 3 and 2. Suppose the input is again C1=2, C2=2 (indices starting from 1). After embedding we get V2=[e21,e22,e23,e24] and V5=[e51,e52,e53,e54].

Because a pair only needs to be computed when xi and xj are both non-zero, the only case the formula above has to evaluate is i=2 and j=5. Therefore:

FM

Extending to more fields, say C1, C2 and C3, the inner products of every pair are needed.

How can they all be computed at once with matrix operations? The key is the following rewrite.
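This is the standard FM simplification (a known identity, written out here for clarity): the pairwise term equals half of "the square of the sum minus the sum of the squares",

\sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j = \frac{1}{2}\left[\Big(\sum_{i=1}^{n} \mathbf{v}_i x_i\Big)^2 - \sum_{i=1}^{n} (\mathbf{v}_i x_i)^2\right]

where the squares are taken element-wise and the final result is summed over the embedding dimension.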

The corresponding code is:

square_of_sum = tf.square(reduce_sum(
    concated_embeds_value, axis=1, keep_dims=True))
sum_of_square = reduce_sum(
    concated_embeds_value * concated_embeds_value, axis=1, keep_dims=True)
cross_term = square_of_sum - sum_of_square
cross_term = 0.5 * reduce_sum(cross_term, axis=2, keep_dims=False)

where concated_embeds_value is the concatenation of the embedding values.

Deep part

The DNN part is straightforward; the FM input and the DNN input are the same group_embedding_dict.

Testing with MovieLens

Download the ml-100k dataset:

wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
unzip ml-100k.zip

Install the required packages: sklearn and deepctr.

Imports:

import pandas
import pandas as pd
import sklearn
from sklearn.metrics import log_loss, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
import tensorflow as tf
from tqdm import tqdm

from deepctr.models import DeepFM
from deepctr.feature_column import SparseFeat, VarLenSparseFeat, get_feature_names
import numpy as np

Read the rating data:

u_data = pd.read_csv("ml-100k/u.data", sep='\t', header=None)
u_data.columns = ['user_id', 'movie_id', 'rating', 'timestamp']

Rated items are labeled 1; unrated items are randomly sampled as negatives:

def neg_sample(u_data, neg_rate=1):
    # sample negatives from the full item pool
    item_ids = u_data['movie_id'].unique()
    print('start neg sample')
    neg_data = []
    # negative sampling
    for user_id, hist in tqdm(u_data.groupby('user_id')):
        # movies this user has already rated
        rated_movie_list = hist['movie_id'].tolist()
        candidate_set = list(set(item_ids) - set(rated_movie_list))
        neg_list_id = np.random.choice(candidate_set, size=len(rated_movie_list) * neg_rate, replace=True)
        for id in neg_list_id:
            neg_data.append([user_id, id, -1, 0])
    u_data_neg = pd.DataFrame(neg_data)
    u_data_neg.columns = ['user_id', 'movie_id', 'rating', 'timestamp']
    u_data = pandas.concat([u_data, u_data_neg])
    print('end neg sample')
    return u_data

Read the item data:

u_item = pd.read_csv("ml-100k/u.item", sep='|', header=None, error_bad_lines=False)
genres_columns = ['Action', 'Adventure',
                  'Animation',
                  'Children', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy',
                  'Film_Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi',
                  'Thriller', 'War', 'Western']

u_item.columns = ['movie_id', 'title', 'release_date', 'video_date', 'url', 'unknown'] + genres_columns

Process genres and drop the individual genre columns:

genres_list = []
for index, row in u_item.iterrows():
    genres = []
    for item in genres_columns:
        if row[item]:
            genres.append(item)
    genres_list.append('|'.join(genres))

u_item['genres'] = genres_list
for item in genres_columns:
    del u_item[item]

Read the user data:

# user id | age | gender | occupation | zip code (region)
u_user = pd.read_csv("ml-100k/u.user", sep='|', header=None)
u_user.columns = ['user_id', 'age', 'gender', 'occupation', 'zip']

Join everything together:

 data = pandas.merge(u_data, u_item, on="movie_id", how='left')
 data = pandas.merge(data, u_user, on="user_id", how='left')
 data.to_csv('ml-100k/data.csv', index=False)

Process the features:

sparse_features = ["movie_id", "user_id",
                   "gender", "age", "occupation", "zip", ]

data[sparse_features] = data[sparse_features].astype(str)
target = ['rating']

# label: 1 if the original rating >= 0 (rated), else 0 (sampled negative)
data['rating'] = [1 if int(x) >= 0 else 0 for x in data['rating']]

Label-encode the sparse features first:

for feat in sparse_features:
    lbe = LabelEncoder()
    data[feat] = lbe.fit_transform(data[feat])

Process the genres feature. A movie can have several genres, so first split the string, then map each genre to an integer id (note the ids start from 1). Because each movie has a different number of genres, compute the maximum length and zero-pad the shorter lists (pad_sequences with post padding):

def split(x):
    key_ans = x.split('|')
    for key in key_ans:
        if key not in key2index:
            # Notice : input value 0 is a special "padding", so we do not use 0 to encode valid feature for sequence input
            key2index[key] = len(key2index) + 1
    return list(map(lambda x: key2index[x], key_ans))

key2index = {}
genres_list = list(map(split, data['genres'].values))
genres_length = np.array(list(map(len, genres_list)))
max_len = max(genres_length)
# Notice : padding=`post`
genres_list = pad_sequences(genres_list, maxlen=max_len, padding='post', )

Build the deepctr feature columns. There are two kinds: fixed-length SparseFeat for sparse categorical features, and variable-length VarLenSparseFeat for multi-valued features such as genres.

fixlen_feature_columns = [SparseFeat(feat, data[feat].nunique(), embedding_dim=4)
                          for feat in sparse_features]

use_weighted_sequence = False
if use_weighted_sequence:
    # Notice : value 0 is for padding for sequence input feature
    varlen_feature_columns = [VarLenSparseFeat(SparseFeat('genres', vocabulary_size=len(key2index) + 1, embedding_dim=4),
                                               maxlen=max_len, combiner='mean', weight_name='genres_weight')]
else:
    # Notice : value 0 is for padding for sequence input feature
    varlen_feature_columns = [VarLenSparseFeat(SparseFeat('genres', vocabulary_size=len(key2index) + 1, embedding_dim=4),
                                               maxlen=max_len, combiner='mean', weight_name=None)]

linear_feature_columns = fixlen_feature_columns + varlen_feature_columns
dnn_feature_columns = fixlen_feature_columns + varlen_feature_columns

feature_names = get_feature_names(linear_feature_columns + dnn_feature_columns)

Build the training input: shuffle the data first, then build the dict of inputs.

data = sklearn.utils.shuffle(data)
train_model_input = {name: data[name] for name in sparse_features}  #
train_model_input["genres"] = genres_list

Build the DeepFM model. Since the target is 0/1, the task is binary and the loss function is binary_crossentropy:

model = DeepFM(linear_feature_columns, dnn_feature_columns, task='binary')

model.compile(optimizer=tf.keras.optimizers.Adam(), loss='binary_crossentropy',
              metrics=['AUC', 'Precision', 'Recall'])
model.summary()

Train the model:

model.fit(train_model_input, data[target].values,
          batch_size=256, epochs=20, verbose=2,
          validation_split=0.2)

Training log:

Epoch 1/20
625/625 - 3s - loss: 0.5081 - auc: 0.8279 - precision: 0.7419 - recall: 0.7695 - val_loss: 0.4745 - val_auc: 0.8513 - val_precision: 0.7563 - val_recall: 0.7936
Epoch 2/20
625/625 - 2s - loss: 0.4695 - auc: 0.8538 - precision: 0.7494 - recall: 0.8105 - val_loss: 0.4708 - val_auc: 0.8539 - val_precision: 0.7498 - val_recall: 0.8127
Epoch 3/20
625/625 - 2s - loss: 0.4652 - auc: 0.8564 - precision: 0.7513 - recall: 0.8139 - val_loss: 0.4704 - val_auc: 0.8545 - val_precision: 0.7561 - val_recall: 0.8017
Epoch 4/20
625/625 - 2s - loss: 0.4624 - auc: 0.8579 - precision: 0.7516 - recall: 0.8146 - val_loss: 0.4724 - val_auc: 0.8542 - val_precision: 0.7296 - val_recall: 0.8526
Epoch 5/20
625/625 - 2s - loss: 0.4607 - auc: 0.8590 - precision: 0.7521 - recall: 0.8173 - val_loss: 0.4699 - val_auc: 0.8550 - val_precision: 0.7511 - val_recall: 0.8141
Epoch 6/20
625/625 - 2s - loss: 0.4588 - auc: 0.8602 - precision: 0.7545 - recall: 0.8165 - val_loss: 0.4717 - val_auc: 0.8542 - val_precision: 0.7421 - val_recall: 0.8265
Epoch 7/20
625/625 - 2s - loss: 0.4574 - auc: 0.8610 - precision: 0.7535 - recall: 0.8192 - val_loss: 0.4722 - val_auc: 0.8547 - val_precision: 0.7549 - val_recall: 0.8023
Epoch 8/20
625/625 - 2s - loss: 0.4561 - auc: 0.8619 - precision: 0.7543 - recall: 0.8201 - val_loss: 0.4717 - val_auc: 0.8548 - val_precision: 0.7480 - val_recall: 0.8185
Epoch 9/20
625/625 - 2s - loss: 0.4531 - auc: 0.8643 - precision: 0.7573 - recall: 0.8210 - val_loss: 0.4696 - val_auc: 0.8583 - val_precision: 0.7598 - val_recall: 0.8103
Epoch 10/20
625/625 - 2s - loss: 0.4355 - auc: 0.8768 - precision: 0.7787 - recall: 0.8166 - val_loss: 0.4435 - val_auc: 0.8769 - val_precision: 0.7756 - val_recall: 0.8293
Epoch 11/20
625/625 - 2s - loss: 0.4093 - auc: 0.8923 - precision: 0.7915 - recall: 0.8373 - val_loss: 0.4301 - val_auc: 0.8840 - val_precision: 0.7806 - val_recall: 0.8390
Epoch 12/20
625/625 - 2s - loss: 0.3970 - auc: 0.8988 - precision: 0.7953 - recall: 0.8497 - val_loss: 0.4286 - val_auc: 0.8867 - val_precision: 0.7903 - val_recall: 0.8299
Epoch 13/20
625/625 - 2s - loss: 0.3896 - auc: 0.9029 - precision: 0.8001 - recall: 0.8542 - val_loss: 0.4253 - val_auc: 0.8888 - val_precision: 0.7913 - val_recall: 0.8322
Epoch 14/20
625/625 - 2s - loss: 0.3825 - auc: 0.9067 - precision: 0.8038 - recall: 0.8584 - val_loss: 0.4205 - val_auc: 0.8917 - val_precision: 0.7885 - val_recall: 0.8506
Epoch 15/20
625/625 - 2s - loss: 0.3755 - auc: 0.9102 - precision: 0.8074 - recall: 0.8624 - val_loss: 0.4204 - val_auc: 0.8940 - val_precision: 0.7868 - val_recall: 0.8607
Epoch 16/20
625/625 - 2s - loss: 0.3687 - auc: 0.9136 - precision: 0.8117 - recall: 0.8653 - val_loss: 0.4176 - val_auc: 0.8956 - val_precision: 0.8097 - val_recall: 0.8236
Epoch 17/20
625/625 - 2s - loss: 0.3617 - auc: 0.9170 - precision: 0.8155 - recall: 0.8682 - val_loss: 0.4166 - val_auc: 0.8966 - val_precision: 0.8056 - val_recall: 0.8354
Epoch 18/20
625/625 - 2s - loss: 0.3553 - auc: 0.9201 - precision: 0.8188 - recall: 0.8716 - val_loss: 0.4168 - val_auc: 0.8977 - val_precision: 0.7996 - val_recall: 0.8492
Epoch 19/20
625/625 - 2s - loss: 0.3497 - auc: 0.9227 - precision: 0.8214 - recall: 0.8741 - val_loss: 0.4187 - val_auc: 0.8973 - val_precision: 0.8079 - val_recall: 0.8358
Epoch 20/20
625/625 - 2s - loss: 0.3451 - auc: 0.9248 - precision: 0.8244 - recall: 0.8753 - val_loss: 0.4210 - val_auc: 0.8982 - val_precision: 0.7945 - val_recall: 0.8617

Finally, let's check a few predictions:

pred_ans = model.predict(train_model_input, batch_size=256)
count = 0
for (i, j) in zip(pred_ans, data['rating'].values):
    print(i, j)
    count += 1
    if count > 10:
        break

The output:

[0.20468083] 0
[0.1988303] 0
[7.7236204e-05] 0
[0.9439401] 1
[0.76648283] 0
[0.80082995] 1
[0.7689271] 0
[0.8515004] 1
[0.93311656] 1
[0.40019292] 0
[0.60735244] 0




Author: jqpeng
Original post: Recommendation algorithms: the LFM algorithm

LFM overview

LFM (Funk SVD) is a recommendation algorithm based on matrix factorization:

R  = P * Q

where:

  • P is the User-LF matrix, i.e. users by latent factors
  • Q is the LF-Item matrix, i.e. latent factors by items
  • R is the User-Item matrix, obtained as P * Q

See the figure below:

LFM

Because the numbers of users and items are huge and the rating matrix R is sparse, it is decomposed into two matrices P (n_user * dim) and Q (dim * n_item), where dim is the number of latent factors.
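In other words, each user u gets a latent vector p_u and each item i a latent vector q_i, and the predicted rating is their inner product (standard matrix-factorization notation, written out here for clarity):

\hat{r}_{ui} = \mathbf{p}_u \cdot \mathbf{q}_i = \sum_{f=1}^{dim} p_{uf}\, q_{if}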

TensorFlow implementation

Download the ml-100k dataset:

!wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
!unzip ml-100k.zip



Resolving files.grouplens.org (files.grouplens.org)... 128.101.65.152
Connecting to files.grouplens.org (files.grouplens.org)|128.101.65.152|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4924029 (4.7M) [application/zip]
Saving to: ‘ml-100k.zip’

ml-100k.zip         100%[===================>]   4.70M  16.2MB/s    in 0.3s    

2020-10-12 12:25:07 (16.2 MB/s) - ‘ml-100k.zip’ saved [4924029/4924029]


The rating data is in u.data, with columns user_id, movie_id, rating, timestamp:

!head ml-100k/u.data



186    302    3    891717742
22    377    1    878887116
244    51    2    880606923
166    346    1    886397596
298    474    4    884182806
115    265    2    881171488
253    465    5    891628467
305    451    3    886324817
6    86    3    883603013

Read the data

import os
def read_data(path: str, separator: str):
    data = []
    with open(path, 'r') as f:
        for line in f.readlines():
            values = line.strip().split(separator)
            user_id, movie_id, rating, timestamp = int(values[0]), int(values[1]), int(values[2]), int(values[3])
            data.append((user_id, movie_id, rating, timestamp))
    return data

data = read_data('ml-100k/u.data', '\t')

print(data[0])



(0, 0, 0.6)

Split into training and test sets, with test_ratio = 0.3:

data = [(d[0], d[1], d[2]/5.0) for d in data]

# split
test_ratio = 0.3
n_test = int(len(data) * test_ratio)
test_data, train_data = data[:n_test], data[n_test:]

Normalize the ids so they start from 0:

# normalize ids
def normalize_id(data):
    new_data = []
    n_user, n_item = 0, 0
    user_id_old2new, item_id_old2new = {}, {}
    for user_id_old, item_id_old, label in data:
        if user_id_old not in user_id_old2new:
            user_id_old2new[user_id_old] = n_user
            n_user += 1
        if item_id_old not in item_id_old2new:
            item_id_old2new[item_id_old] = n_item
            n_item += 1
        new_data.append((user_id_old2new[user_id_old], item_id_old2new[item_id_old], label))
    return new_data, n_user, n_item, user_id_old2new, item_id_old2new

data, n_user, n_item, user_id_old2new, item_id_old2new = normalize_id(data)

Inspect the data

print(train_data[0:10])
print(test_data[0])
print('n_user',n_user)
print('n_item',n_item)



(196, 242, 0.6)
n_user 943
n_item 1682

Prepare the datasets

import tensorflow as tf

def xy(data):
        user_ids = tf.constant([d[0] for d in data], dtype=tf.int32)
        item_ids = tf.constant([d[1] for d in data], dtype=tf.int32)
        labels = tf.constant([d[2] for d in data], dtype=tf.keras.backend.floatx())
        return {'user_id': user_ids, 'item_id': item_ids}, labels

batch = 128
train_ds = tf.data.Dataset.from_tensor_slices(xy(train_data)).shuffle(len(train_data)).batch(batch)
test_ds = tf.data.Dataset.from_tensor_slices(xy(test_data)).batch(batch)

The TF model

def LFM_model(n_user: int, n_item: int, dim=100, l2=1e-6) -> tf.keras.Model:
    l2 = tf.keras.regularizers.l2(l2)
    user_id = tf.keras.Input(shape=(), name='user_id', dtype=tf.int32)

    user_embedding = tf.keras.layers.Embedding(n_user, dim, embeddings_regularizer=l2)(user_id)
    # (None,dim)
    item_id = tf.keras.Input(shape=(), name='item_id', dtype=tf.int32)
    item_embedding = tf.keras.layers.Embedding(n_item, dim, embeddings_regularizer=l2)(item_id)
    # (None,dim)
    r = user_embedding * item_embedding
    y = tf.reduce_sum(r, axis=1)
    y = tf.where(y < 0., 0., y)
    y = tf.where(y > 1., 1., y)
    y = tf.expand_dims(y, axis=1)
    return tf.keras.Model(inputs=[user_id, item_id], outputs=y)

model = LFM_model(n_user + 1, n_item + 1, 64)

Compile the model

model.compile(optimizer="adam", loss=tf.losses.MeanSquaredError(), metrics=['AUC', 'Precision', 'Recall'])
model.summary()




__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
user_id (InputLayer)            [(None,)]            0                                            
__________________________________________________________________________________________________
item_id (InputLayer)            [(None,)]            0                                            
__________________________________________________________________________________________________
embedding_12 (Embedding)        (None, 64)           60416       user_id[0][0]                    
__________________________________________________________________________________________________
embedding_13 (Embedding)        (None, 64)           107712      item_id[0][0]                    
__________________________________________________________________________________________________
tf_op_layer_Mul_6 (TensorFlowOp [(None, 64)]         0           embedding_12[0][0]               
                                                                 embedding_13[0][0]               
__________________________________________________________________________________________________
tf_op_layer_Sum_6 (TensorFlowOp [(None,)]            0           tf_op_layer_Mul_6[0][0]          
__________________________________________________________________________________________________
tf_op_layer_Less_6 (TensorFlowO [(None,)]            0           tf_op_layer_Sum_6[0][0]          
__________________________________________________________________________________________________
tf_op_layer_SelectV2_12 (Tensor [(None,)]            0           tf_op_layer_Less_6[0][0]         
                                                                 tf_op_layer_Sum_6[0][0]          
__________________________________________________________________________________________________
tf_op_layer_Greater_6 (TensorFl [(None,)]            0           tf_op_layer_SelectV2_12[0][0]    
__________________________________________________________________________________________________
tf_op_layer_SelectV2_13 (Tensor [(None,)]            0           tf_op_layer_Greater_6[0][0]      
                                                                 tf_op_layer_SelectV2_12[0][0]    
__________________________________________________________________________________________________
tf_op_layer_ExpandDims_6 (Tenso [(None, 1)]          0           tf_op_layer_SelectV2_13[0][0]    
==================================================================================================
Total params: 168,128
Trainable params: 168,128
Non-trainable params: 0
__________________________________________________________________________________________________

Now train for 10 epochs:

model.fit(train_ds,validation_data=test_ds,epochs=10)



Epoch 1/10

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/indexed_slices.py:432: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
547/547 [==============================] - 2s 4ms/step - loss: 0.4388 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.0594 - val_loss: 0.1439 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.4180
Epoch 2/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0585 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.8171 - val_loss: 0.0486 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8655
Epoch 3/10
547/547 [==============================] - 2s 3ms/step - loss: 0.0393 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.9053 - val_loss: 0.0433 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8982
Epoch 4/10
547/547 [==============================] - 2s 3ms/step - loss: 0.0346 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.9107 - val_loss: 0.0415 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8947
Epoch 5/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0301 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.9071 - val_loss: 0.0410 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8869
Epoch 6/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0257 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.8958 - val_loss: 0.0410 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8849
Epoch 7/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0218 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.8844 - val_loss: 0.0414 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8753
Epoch 8/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0183 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.8719 - val_loss: 0.0425 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8659
Epoch 9/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0153 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.8624 - val_loss: 0.0435 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8620
Epoch 10/10
547/547 [==============================] - 2s 4ms/step - loss: 0.0132 - auc: 0.0000e+00 - precision: 1.0000 - recall: 0.8535 - val_loss: 0.0449 - val_auc: 0.0000e+00 - val_precision: 1.0000 - val_recall: 0.8531


Author: jqpeng
Original post: A temporary workaround for docker failing to start because of iptables

After a new CA certificate was installed, docker would no longer start properly.
The symptom was /sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2 hanging,
and the log showed the error xtables contention detected while running ..

Troubleshooting steps:

  1. The restart hammer: stop then start docker. No luck.
  2. Reinstall iptables. No luck.
  3. Upgrade docker to the latest version. No luck.
  4. It was clearly still an iptables problem, but there was no time for a thorough investigation, so as a temporary measure docker was made to stop using iptables: uninstall iptables and start docker with iptables disabled (a config sketch follows below).
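As a sketch of that last step (the exact mechanism depends on how dockerd is launched on the host; the path below is the standard daemon config location), iptables management can be turned off in /etc/docker/daemon.json:

{
  "iptables": false
}

or by starting the daemon with dockerd --iptables=false. Note that with iptables disabled, published container ports and outbound NAT are no longer set up automatically, so this is only a stopgap.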

Author: jqpeng
Original post: Using Alibaba's open-source JDK Dragonwell8 in containers

Alibaba Dragonwell is Alibaba's OpenJDK distribution with long-term support; Alibaba says it runs in its own production environment. Alibaba Dragonwell is compatible with the Java SE standard, so switching to it is straightforward.

Download

Releases can be downloaded from https://github.com/alibaba/dragonwell8/releases?spm=5176.cndragonwell.0.0.4c5a7568DpYPsp; the latest version at the time of writing is 8.4.4 GA.

ali jdk8

Alibaba also provides mirror download links, which speed up the download:

8.4.4GA

File name (mirrors available for China mainland and the United States):
Alibaba_Dragonwell_8.4.4-GA_Linux_x64.tar.gz
Alibaba_Dragonwell_8.4.4-GA_source.tar.gz
Alibaba_Dragonwell_8.4.4-Experimental_Windows_x64.tar.gz
java8-api-8.4.4-javadoc.jar
java8-api-8.4.4-sources.jar
java8-api-8.4.4.jar

SHA256 checksum

Usage

wget https://dragonwell.oss-cn-shanghai.aliyuncs.com/8/8.4.4-GA/Alibaba_Dragonwell_8.4.4-GA_Linux_x64.tar.gz
tar zxvf Alibaba_Dragonwell_8.4.4-GA_Linux_x64.tar.gz 
cd jdk8u262-b10/
 bin/java -version

Output:

openjdk version "1.8.0_262"
OpenJDK Runtime Environment (Alibaba Dragonwell 8.4.4) (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (Alibaba Dragonwell 8.4.4) (build 25.262-b10, mixed mode)

Using it in a container

We need a base image: starting from openjdk:8-jdk-slim, replace the JDK with dragonwell8.

 FROM openjdk:8-jdk-slim
 MAINTAINER jadepeng
 
 RUN sed -i 's#http://deb.debian.org#https://mirrors.163.com#g' /etc/apt/sources.list \
     && apt-get update\
     && apt-get install -y procps unzip curl bash tzdata ttf-dejavu \
     && rm -rf /var/cache/apt/ \
     && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
     && echo "Asia/Shanghai" > /etc/timezone
 
 RUN rm -r -f /usr/local/openjdk-8
 
 ADD jdk8u262-b10  /usr/local/openjdk-8

Build:

docker build -t dragonwell8:8.4.4 .

Java applications can now use dragonwell8:8.4.4 as their base image.
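For instance, an application image could be built on top of it; this is just a sketch, and the jar path and entrypoint below are placeholders:

FROM dragonwell8:8.4.4
# the jar name and entrypoint are illustrative
ADD target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]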

Using Dragonwell8 coroutines

One of dragonwell8's headline features is coroutine (Coroutine) support.

There is an official demo:

Take a program that plays ping-pong 1,000,000 times as an example; it is a blocking, switch-intensive workload.

           o      .   _______ _______
              \_ 0     /______//______/|   @_o
                /\_,  /______//______/     /\
               | \    |      ||      |     / |

static final ExecutorService THREAD_POOL = Executors.newCachedThreadPool();

public static void main(String[] args) throws Exception {
    BlockingQueue<Byte> q1 = new LinkedBlockingQueue<>(), q2 = new LinkedBlockingQueue<>();
    THREAD_POOL.submit(() -> pingpong(q2, q1)); // thread A
    Future<?> f = THREAD_POOL.submit(() -> pingpong(q1, q2)); // thread B
    q1.put((byte) 1);
    System.out.println(f.get() + " ms");
}

private static long pingpong(BlockingQueue<Byte> in, BlockingQueue<Byte> out) throws Exception {
    long start = System.currentTimeMillis();
    for (int i = 0; i < 1_000_000; i++) out.put(in.take());
    return System.currentTimeMillis() - start;
}

Run it and check the execution time:

$java PingPong
13212 ms

// enable Wisp2
$java -XX:+UseWisp2 -XX:ActiveProcessorCount=1 PingPong
882 ms

With Wisp2 enabled the run is roughly ten-plus times faster overall, and it only takes adding -XX:+UseWisp2 at startup; not a single line of code changes to get the benefit of coroutines.

Afterwards, jstack shows that the running threads are now executing as coroutines:

 - Coroutine [0x7f6d6d60fb20] "Thread-978" #1076 active=1 steal=0 steal_fail=0 preempt=0 park=0/-1
        at java.dyn.CoroutineSupport.unsafeSymmetricYieldTo(CoroutineSupport.java:138)
--
 - Coroutine [0x7f6d6d60f880] "Thread-912" #1009 active=1 steal=0 steal_fail=0 preempt=0 park=0/-1
        at java.dyn.CoroutineSupport.unsafeSymmetricYieldTo(CoroutineSupport.java:138)

...

The method on the topmost frame is the coroutine switch: the thread made a blocking call (sleep) and yielded its execution right.

With Wisp2 enabled, Java threads are no longer simply mapped onto kernel threads; each corresponds to a coroutine instead. The JVM schedules a large number of coroutines on a small number of kernel threads, which reduces kernel scheduling overhead and improves web-server performance.

Performance figures from Alibaba's production environment:

  • In complex business applications (tomcat plus a lot of netty-based middleware) the gain was roughly 10%.
  • In middleware applications (database proxies, MQ) the gain was roughly 20%.

For more on Wisp's design, implementation, and APIs, see the Wisp documentation.



Author: jqpeng
Original post: Let's build a code naming tool together

Are you still agonizing over what to name things in your code?

There are only two hard things in Computer Science: cache invalidation and naming things. – Phil Karlton

Code naming

So how do we name things better? Is there a tool that can help? A round of searching turned up nothing satisfactory, so I built one myself: https://jadepeng.gitee.io/code-naming-tool/

How to use it: open the page, type a Chinese name into the Chinese input box, and press Enter. You can also type English directly into the English box to search for candidates.

Existing tools

unbug.github.io/codelf/ is one option. It first calls translation services such as Youdao and Baidu, then searches code through searchcode and extracts variable names from the results.

codelf

The UI looks slick, but the quality of the recommended names is uneven, so they lose their value as a reference.

A new idea

As the saying goes, learn from history. Looking at it from another angle, we can absorb naming experience from excellent open-source libraries: see how they name things, and use that as a reference.

The plan:

  1. Read variable, method, and class names from code bases such as spring and apache
  2. Match candidate names against the keyword
  3. Rank the candidates

The tool

Collecting good names

To collect names, the first thought is to read code repositories: download the code and then parse it. That is a huge amount of work, so pass.

What then? From another angle, it can be done with Java reflection.

First add a helper library:

<dependency>
    <groupId>org.reflections</groupId>
    <artifactId>reflections</artifactId>
    <version>0.9.12</version>
</dependency>

Then initialize Reflections. The FilterBuilder filters the packages to scan; setting "org", "javax", "com", "io" covers most of the major open-source libraries such as spring and apache.

 List<ClassLoader> classLoadersList = new LinkedList<ClassLoader>();
        classLoadersList.add(ClasspathHelper.contextClassLoader());
        classLoadersList.add(ClasspathHelper.staticClassLoader());

        Reflections reflections = new Reflections(new ConfigurationBuilder()
                .setScanners(new SubTypesScanner(false), new ResourcesScanner())
                .setUrls(ClasspathHelper.forClassLoader(classLoadersList.toArray(new ClassLoader[0])))
                .filterInputsBy(new FilterBuilder().includePackage("org","javax","com","io")));

Then reflections.getSubTypesOf(Object.class) gives us the classes. Note that we create a Map<String, Integer> name2count = new HashMap<String, Integer>(); to store each name together with the number of times it occurs.

Set<Class<? extends Object>> allClasses =
                reflections.getSubTypesOf(Object.class);
        Map<String, Integer> name2count = new HashMap<String, Integer>();
        for (Class<?> clazz : allClasses) {
            System.out.println(clazz.getName());
            try {
                appendToNameMap(name2count, clazz.getSimpleName());

                Field[] fields = clazz.getDeclaredFields();
                for (Field field : fields) {
                    String name = field.getName();
                    appendToNameMap(name2count, name);
                }
                Method[] methods = clazz.getMethods();
                for (Method method : methods) {
                    String name = method.getName();
                    appendToNameMap(name2count, name);
                    // parameters
                    Parameter[] parameters =  method.getParameters();
                    for (Parameter param : parameters) {
                        name = param.getName();
                        appendToNameMap(name2count, name);
                    }
                }
            } catch (Throwable t) {
            }
        }

where appendToNameMap is:

 private static void appendToNameMap(Map<String, Integer> name2count, String name) {
        // filter
        if(name.contains("-") || name.contains("_")|| name.contains("$")){
            return;
        }

        if (!name2count.containsKey(name)) {
            name2count.put(name, 1);
        } else {
            name2count.put(name, name2count.get(name) +1);
        }
    }

Finally, save the result to a file to use as our resource:

FileUtils.writeAllText(JSON.toJSONString(name2count), new File("name2count.txt"));

The result can be viewed at https://gitee.com/jadepeng/code-naming-tool/blob/master/vars.js.

Name recommendation

The recommendation still follows the same pattern: translate first, then search with the translation and retrieve candidates.

Translation simply calls NetEase Youdao's service; but how do we handle the search?

The simplest approach is obviously to tokenize and build an index, and lucene is the standard choice. But lucene means running a server. Pass!

So we need a browser-side lucene; after some googling, flexsearch was chosen.

flexsearch

flexsearch has 6.5k stars on GitHub, so it was the first choice.

Let's look at the concrete implementation.

Building the index

Initialize FlexSearch, then index the code names collected earlier:

 var index = new FlexSearch({
            encode: "advanced",
            tokenize: "reverse",
            suggest: true,
            cache: true
        })
        var data = []
        var i = 0
        for (var name in names) {
            var indexName = name.replace(/([A-Z])/g, " $1")
            data[i] = {
                "name": name,
                "count": names[name]
            }
            index.add(i++, indexName)
        }

A small trick here: name.replace(/([A-Z])/g, " $1") inserts a space before each uppercase letter, which splits a camelCase name into words (for example, getUserName becomes "get User Name").
The data array also keeps every name together with its occurrence count.

Searching for candidates

Translate first, then hand the translation to FlexSearch:

function searchFromIndex(value) {
        var results = index.search(value, 25)
    
        results = results.map(function (i) {
            return data[i]
        })
    
        results = results.sort(function (a, b) {
            return b.count - a.count
        })
        return results
    }

The search returns index positions into data; they are mapped back to objects and then sorted by count in descending order.

Tip: in theory, removing stop words from the translation should give better search results; that is left for later.

Displaying the results

Format the results:

function formatSuggestion(item){
    return `${item.name} <span class='tips'>appears ${item.count} times in the code base (related searches: <a target='_blank' href='https://unbug.github.io/codelf/#${item.name}'>codelf</a> &nbsp; <a target='_blank' href='https://searchcode.com/?q=${item.name}&lan=23'>searchcode</a>)</span>`;
}

Links to codelf and searchcode are added; the displayed result looks like this:

Search results

Source code

Naming tool: https://jadepeng.gitee.io/code-naming-tool/

Feel free to try it out, and you are welcome to fork it and contribute code.

Future work

Currently it is just translation plus search; there is plenty of room for improvement:

  • Remove stop words before searching
  • Recommend based on semantic text similarity
  • Support domain-specific terminology
