Why indexes are needed

Gremlin is essentially a step-by-step filtering machine. Take the following simple Gremlin query:

g.V().hasLabel("label").has("prop","value")

It runs like this:

  • Fetch all vertices V
  • Filter down to those whose label is "label"
  • Filter down to those where prop = value

When the data volume is large this is very expensive, hence the need for query optimization.

HugeGraph's optimization is that HugeGraphStepStrategy extracts the has conditions from the traversal and pushes them down to index lookups, reducing the amount of data read. For the query above, the label and prop conditions are folded into a single HugeGraphStep, so the backend receives one condition query instead of performing a full vertex scan.

TraversalUtil.extractHasContainer:

public static void extractHasContainer(HugeGraphStep<?, ?> newStep,
                                       Traversal.Admin<?, ?> traversal) {
    Step<?, ?> step = newStep;
    do {
        step = step.getNextStep();
        if (step instanceof HasStep) {
            HasContainerHolder holder = (HasContainerHolder) step;
            for (HasContainer has : holder.getHasContainers()) {
                if (!GraphStep.processHasContainerIds(newStep, has)) {
                    newStep.addHasContainer(has);
                }
            }
            TraversalHelper.copyLabels(step, step.getPreviousStep(), false);
            traversal.removeStep(step);
        }
    } while (step instanceof HasStep || step instanceof NoOpBarrierStep);
}

An introduction to HugeGraph indexes

HugeGraph defines index types through IndexLabel, which describes the constraints of an index.

  • indexType: the type of index to build. Five types are currently supported: Secondary, Range, Search, Shard, and Unique.

    • Secondary: an exact-match secondary index. Composite (multi-property) secondary indexes are allowed, and composite indexes support prefix search.

      • Single property: supports equality queries. For example, with a secondary index on the city property of person vertices, g.V().has("city", "北京") finds all vertices whose city is 北京.

      • Composite index: supports prefix and equality queries. For example, with a composite index on the city and street properties of person vertices, g.V().has("city", "北京").has("street", "中关村街道") finds all vertices whose city is 北京 and whose street is 中关村街道, while g.V().has("city", "北京") finds all vertices whose city is 北京.

        Secondary index queries are based on "is" or "equals" conditions; partial matching is not supported.

    • Range: supports range queries over numeric values.

      • Must be a single numeric or date property. For example, with a range index on the age property of person vertices, g.V().has("age", P.gt(18)) finds all vertices whose age is greater than 18. Besides P.gt(), the predicates P.gte(), P.lte(), P.lt(), P.eq(), P.between(), P.inside(), and P.outside() are also supported.

    • Search: supports full-text search.

      • Must be a single text property. For example, with a full-text index on the address property of person vertices, g.V().has("address", Text.contains("大厦")) finds all vertices whose address contains 大厦.

        Search index queries are based on "is" or "contains" conditions.

    • Shard: supports prefix match plus numeric range queries.

      • A shard index over N properties supports range queries when all prefix properties are matched by equality. For example, with a shard index on the city and age properties of person vertices, g.V().has("city", "北京").has("age", P.between(18, 30)) finds all vertices whose city is 北京 and whose age is at least 18 and less than 30.

      • When all N properties of a shard index are text, it is equivalent to a secondary index.

      • When a shard index has only a single numeric or date property, it is equivalent to a range index.

        A shard index may contain any number of numeric or date properties, but a query may supply at most one range condition, and every property preceding it in the index must be constrained by equality.

    • Unique: enforces uniqueness of property values, i.e. constrains values not to repeat. Composite unique indexes are allowed, but unique indexes cannot be used for queries.

      • A uniqueness index over one or more properties cannot be used for lookups; it only constrains the values, and a duplicate value raises an error.
Excerpted from https://hugegraph.github.io/hugegraph-doc/clients/hugegraph-client.html

Secondary and Range are the most commonly used index types.
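As a quick illustration, index labels of these two types can be declared through the hugegraph-client schema API, roughly as follows (a minimal sketch; the index label names are invented for the example):

SchemaManager schema = hugeClient.schema();
// Secondary index: exact-match lookups on person.city
schema.indexLabel("personByCity")
      .onV("person").by("city")
      .secondary().ifNotExist().create();
// Range index: numeric range queries on person.age
schema.indexLabel("personByAge")
      .onV("person").by("age")
      .range().ifNotExist().create();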

How indexes are stored

Let's walk through the storage path in the source code. The core logic is in GraphIndexTransaction.updateIndex:

/**
 * Update index(user properties) of vertex or edge
 * @param ilId the id of index label
 * @param element the properties owner
 * @param removed remove or add index
 */
protected void updateIndex(Id ilId, HugeElement element, boolean removed) {
    SchemaTransaction schema = this.params().schemaTransaction();
    IndexLabel indexLabel = schema.getIndexLabel(ilId);
    E.checkArgument(indexLabel != null,
                    "Not exist index label with id '%s'", ilId);

    // Collect property values of index fields
    List<Object> allPropValues = new ArrayList<>();
    int fieldsNum = indexLabel.indexFields().size();
    int firstNullField = fieldsNum;
    for (Id fieldId : indexLabel.indexFields()) {
        HugeProperty<Object> property = element.getProperty(fieldId);
        if (property == null) {
            E.checkState(hasNullableProp(element, fieldId),
                         "Non-null property '%s' is null for '%s'",
                         this.graph().propertyKey(fieldId), element);
            if (firstNullField == fieldsNum) {
                firstNullField = allPropValues.size();
            }
            allPropValues.add(INDEX_SYM_NULL);
        } else {
            E.checkArgument(!INDEX_SYM_NULL.equals(property.value()),
                            "Illegal value of index property: '%s'",
                            INDEX_SYM_NULL);
            allPropValues.add(property.value());
        }
    }

    if (firstNullField == 0 && !indexLabel.indexType().isUnique()) {
        // The property value of first index field is null
        return;
    }
    // Not build index for record with nullable field (except unique index)
    List<Object> propValues = allPropValues.subList(0, firstNullField);

    // Expired time
    long expiredTime = element.expiredTime();

    // Update index for each index type
    switch (indexLabel.indexType()) {
        case RANGE_INT:
        case RANGE_FLOAT:
        case RANGE_LONG:
        case RANGE_DOUBLE:
            E.checkState(propValues.size() == 1,
                         "Expect only one property in range index");
            Object value = NumericUtil.convertToNumber(propValues.get(0));
            this.updateIndex(indexLabel, value, element.id(),
                             expiredTime, removed);
            break;
        case SEARCH:
            E.checkState(propValues.size() == 1,
                         "Expect only one property in search index");
            value = propValues.get(0);
            Set<String> words = this.segmentWords(value.toString());
            for (String word : words) {
                this.updateIndex(indexLabel, word, element.id(),
                                 expiredTime, removed);
            }
            break;
        case SECONDARY:
            // Secondary index maybe include multi prefix index
            for (int i = 0, n = propValues.size(); i < n; i++) {
                List<Object> prefixValues = propValues.subList(0, i + 1);
                // prefixValues is list or set, should create index for
                // each item
                if (prefixValues.get(0) instanceof Collection) {
                    for (Object propValue :
                         (Collection<Object>) prefixValues.get(0)) {
                        value = escapeIndexValueIfNeeded(propValue.toString());
                        this.updateIndex(indexLabel, value, element.id(),
                                         expiredTime, removed);
                    }
                } else {
                    value = ConditionQuery.concatValues(prefixValues);
                    value = escapeIndexValueIfNeeded((String) value);
                    this.updateIndex(indexLabel, value, element.id(),
                                     expiredTime, removed);
                }
            }
            break;
        case SHARD:
            value = ConditionQuery.concatValues(propValues);
            value = escapeIndexValueIfNeeded((String) value);
            this.updateIndex(indexLabel, value, element.id(),
                             expiredTime, removed);
            break;
        case UNIQUE:
            value = ConditionQuery.concatValues(allPropValues);
            assert !value.equals("");
            Id id = element.id();
            // TODO: add lock for updating unique index
            if (!removed && this.existUniqueValue(indexLabel, value, id)) {
                throw new IllegalArgumentException(String.format(
                      "Unique constraint %s conflict is found for %s",
                      indexLabel, element));
            }
            this.updateIndex(indexLabel, value, element.id(),
                             expiredTime, removed);
            break;
        default:
            throw new AssertionError(String.format(
                  "Unknown index type '%s'", indexLabel.indexType()));
    }
}
  • The parameters are the index label id and the data element (HugeElement)
  • First, schema.getIndexLabel(ilId) resolves the IndexLabel by id
  • Then the element's property values are collected for the index label's fields
  • Finally, a switch on the index type dispatches to type-specific handling.

Range indexes suit queries whose semantics are: a property value greater than, less than, greater than or equal to, less than or equal to some bound, or within an interval. Typical candidates are fairly continuous properties such as age, price, or score.

Range indexes are handled as follows:

  • First check that there is exactly one property value; a range index cannot be composite.
  • Then call updateIndex to save the index entry.
E.checkState(propValues.size() == 1,
             "Expect only one property in range index");
Object value = NumericUtil.convertToNumber(propValues.get(0));
this.updateIndex(indexLabel, value, element.id(),
                 expiredTime, removed);

The updateIndex code:

private void updateIndex(IndexLabel indexLabel, Object propValue,
                         Id elementId, long expiredTime, boolean removed) {
    HugeIndex index = new HugeIndex(this.graph(), indexLabel);
    index.fieldValues(propValue);
    index.elementIds(elementId, expiredTime);

    if (removed) {
        this.doEliminate(this.serializer.writeIndex(index));
    } else {
        this.doAppend(this.serializer.writeIndex(index));
    }
}
  • Build the HugeIndex; removed decides whether the entry is appended or eliminated.
  • Serialize the index through the GraphSerializer.

Let's look at how the serializer handles this, taking the binary serializer as an example:

Id id = index.id();
HugeType type = index.type();
byte[] value = null;
if (!type.isNumericIndex() && indexIdLengthExceedLimit(id)) {
    id = index.hashId();
    // Save field-values as column value if the key is a hash string
    value = StringEncoding.encode(index.fieldValues().toString());
}

entry = newBackendEntry(type, id);
entry.column(this.formatIndexName(index), value);
entry.subId(index.elementId());

if (index.hasTtl()) {
    entry.ttl(index.ttl());
}
  • A BackendEntry is generated whose id is the index id
  • The column name is produced by formatIndexName; the column value is usually null
  • The subId is the element id.

The index id:

public static Id formatIndexId(HugeType type, Id indexLabelId,
                               Object fieldValues) {
    if (type.isStringIndex()) {
        String value = "";
        if (fieldValues instanceof Id) {
            value = IdGenerator.asStoredString((Id) fieldValues);
        } else if (fieldValues != null) {
            value = fieldValues.toString();
        }
        /*
         * Modify order between index label and field-values to put the
         * index label in front(hugegraph-1317)
         */
        String strIndexLabelId = IdGenerator.asStoredString(indexLabelId);
        return SplicingIdGenerator.splicing(strIndexLabelId, value);
    } else {
        assert type.isRangeIndex();
        int length = type.isRange4Index() ? 4 : 8;
        BytesBuffer buffer = BytesBuffer.allocate(4 + length);
        buffer.writeInt(SchemaElement.schemaId(indexLabelId));
        if (fieldValues != null) {
            E.checkState(fieldValues instanceof Number,
                         "Field value of range index must be number:" +
                         " %s", fieldValues.getClass().getSimpleName());
            byte[] bytes = number2bytes((Number) fieldValues);
            buffer.write(bytes);
        }
        return buffer.asId();
    }
}
  • For a range index, the id is SchemaElement.schemaId(indexLabelId) + fieldValues
  • For a string index, the id is indexLabelId:fieldValues spliced into a string (via SplicingIdGenerator.splicing()).
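To make the two layouts concrete, here is a hedged sketch (the label id and field values are invented; the actual splicing and escaping rules are implementation details):

import java.nio.ByteBuffer;

// Range index id: 4-byte int index-label id followed by the
// fixed-width numeric value (4 bytes for a RANGE_INT index).
ByteBuffer buf = ByteBuffer.allocate(4 + 4);
buf.putInt(5);   // index label id = 5
buf.putInt(30);  // field value, e.g. age = 30
byte[] rangeIndexId = buf.array();

// String index id: the stored label id and the field values are
// spliced into one string, conceptually "5:北京" for label id 5
// and field value 北京.
String stringIndexId = "5" + ":" + "北京";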
protected byte[] formatIndexName(HugeIndex index) {
    BytesBuffer buffer;
    Id elemId = index.elementId();
    if (!this.indexWithIdPrefix) {
        int idLen = 1 + elemId.length();
        buffer = BytesBuffer.allocate(idLen);
    } else {
        Id indexId = index.id();
        HugeType type = index.type();
        if (!type.isNumericIndex() && indexIdLengthExceedLimit(indexId)) {
            indexId = index.hashId();
        }
        int idLen = 1 + elemId.length() + 1 + indexId.length();
        buffer = BytesBuffer.allocate(idLen);
        // Write index-id
        buffer.writeIndexId(indexId, type);
    }
    // Write element-id
    buffer.writeId(elemId);
    // Write expired time if needed
    if (index.hasTtl()) {
        buffer.writeVLong(index.expiredTime());
    }

    return buffer.bytes();
}

formatIndexName determines the column name:

  • First write the indexId, i.e. the index id generated above by formatIndexId
  • Then write the elemId.

Finally, when writing to the storage backend:

@Override
public void insert(Session session, BackendEntry entry) {
    assert !entry.columns().isEmpty();
    for (BackendColumn col : entry.columns()) {
        assert entry.belongToMe(col) : entry;
        session.put(this.table(), col.name, col.value);
    }
}

For a range index, the key is prefixed with the int index-label id, followed by the bytes of the index value, and suffixed with the element id, so a range index is naturally ordered.

Storage layout:

index_label_id | field_values | element_ids

A secondary index is stored the same way:

 indexLabelId | fieldValues | element_ids
  • field_values: the property values, either a single property or several properties concatenated
  • index_label_id: the id of the index label
  • element_ids: the ids of vertices or edges.
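For example, for an age range index with index label id 5 (the element ids v1, v2, v7 are invented), the stored keys sort like this, so a single scan over the key range returns ids already ordered by age:

5 | 18 | v1
5 | 25 | v7
5 | 30 | v2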

Analyzing the index query process

Query analysis starts from GraphTransaction.query: for a ConditionQuery, optimizeQueries is called to optimize the query.


public QueryResults<BackendEntry> query(Query query) {
    if (!(query instanceof ConditionQuery)) {
        LOG.debug("Query{final:{}}", query);
        return super.query(query);
    }

    QueryList<BackendEntry> queries = this.optimizeQueries(query,
                                                           super::query);
    LOG.debug("{}", queries);
    return queries.empty() ? QueryResults.empty() :
           queries.fetch(this.pageSize);
}

optimizeQueries first flattens the condition query (for example, an IN query expands into multiple queries: g.V().has("city", P.within("北京", "上海")) becomes two queries, city=北京 and city=上海, whose results are unioned), then processes each flattened ConditionQuery (cq).

For each cq, indexQuery is called to perform the index lookup.

protected <R> QueryList<R> optimizeQueries(Query query,
                                           QueryResults.Fetcher<R> fetcher) {
    QueryList<R> queries = new QueryList<>(query, fetcher);
    for (ConditionQuery cq : ConditionQueryFlatten.flatten(
                             (ConditionQuery) query)) {
        // Optimize by sysprop
        Query q = this.optimizeQuery(cq);
        /*
         * NOTE: There are two possibilities for this query:
         * 1.sysprop-query, which would not be empty.
         * 2.index-query result(ids after optimization), which may be empty.
         */
        if (q == null) {
            queries.add(this.indexQuery(cq), this.batchSize);
        } else if (!q.empty()) {
            queries.add(q);
        }
    }
    return queries;
}

The core of the index query is GraphIndexTransaction.queryIndex:

@Watched(prefix = "index")
public IdHolderList queryIndex(ConditionQuery query) {
    // Index query must have been flattened in Graph tx
    query.checkFlattened();

    // NOTE: Currently we can't support filter changes in memory
    if (this.hasUpdate()) {
        throw new HugeException("Can't do index query when " +
                                "there are changes in transaction");
    }

    // Can't query by index and by non-label sysprop at the same time
    List<Condition> conds = query.syspropConditions();
    if (conds.size() > 1 ||
        (conds.size() == 1 && !query.containsCondition(HugeKeys.LABEL))) {
        throw new HugeException("Can't do index query with %s and %s",
                                conds, query.userpropConditions());
    }

    // Query by index
    query.optimized(OptimizedType.INDEX);
    if (query.allSysprop() && conds.size() == 1 &&
        query.containsCondition(HugeKeys.LABEL)) {
        // Query only by label
        return this.queryByLabel(query);
    } else {
        // Query by userprops (or userprops + label)
        return this.queryByUserprop(query);
    }
}

It performs some checks first, then decides whether there are user-property conditions: if not, it queries directly by label; otherwise it goes through queryByUserprop, looking up results by property values.

@Watched(prefix = "index")
private IdHolderList queryByUserprop(ConditionQuery query) {
    // Get user applied label or collect all qualified labels with
    // related index labels
    Set<MatchedIndex> indexes = this.collectMatchedIndexes(query);
    if (indexes.isEmpty()) {
        Id label = query.condition(HugeKeys.LABEL);
        throw noIndexException(this.graph(), query, label);
    }

    // Value type of Condition not matched
    boolean paging = query.paging();
    if (!validQueryConditionValues(this.graph(), query)) {
        return IdHolderList.empty(paging);
    }

    // Do index query
    IdHolderList holders = new IdHolderList(paging);
    for (MatchedIndex index : indexes) {
        for (IndexLabel il : index.indexLabels()) {
            validateIndexLabel(il);
        }
        if (paging && index.indexLabels().size() > 1) {
            throw new NotSupportException("joint index query in paging");
        }

        if (index.containsSearchIndex()) {
            // Do search-index query
            holders.addAll(this.doSearchIndex(query, index));
        } else {
            // Do secondary-index, range-index or shard-index query
            IndexQueries queries = index.constructIndexQueries(query);
            assert !paging || queries.size() <= 1;
            IdHolder holder = this.doSingleOrJointIndex(queries);
            holders.add(holder);
        }

        /*
         * NOTE: need to skip the offset if offset > 0, but can't handle
         * it here because the query may a sub-query after flatten,
         * so the offset will be handle in QueryList.IndexQuery
         *
         * TODO: finish early here if records exceeds required limit with
         * FixedIdHolder.
         */
    }
    return holders;
}

queryByUserprop first collects the matching indexes (collectMatchedIndexes); if no index matches, it throws an error.

If indexes match, each is queried in turn: a search index goes through doSearchIndex; otherwise constructIndexQueries runs first, followed by doSingleOrJointIndex.

Search indexes

Search indexes are handled specially because the text must be segmented into words:

@Watched(prefix = "index")
private IdHolderList doSearchIndex(ConditionQuery query,
                                   MatchedIndex index) {
    query = this.constructSearchQuery(query, index);
    // Sorted by matched count
    IdHolderList holders = new SortByCountIdHolderList(query.paging());
    List<ConditionQuery> flatten = ConditionQueryFlatten.flatten(query);
    for (ConditionQuery q : flatten) {
        if (!q.noLimit() && flatten.size() > 1) {
            // Increase limit for union operation
            increaseLimit(q);
        }
        IndexQueries queries = index.constructIndexQueries(q);
        assert !query.paging() || queries.size() <= 1;
        IdHolder holder = this.doSingleOrJointIndex(queries);
        // NOTE: ids will be merged into one IdHolder if not in paging
        holders.add(holder);
    }
    return holders;
}
  • The query is constructed first, then the results are combined.
  • The interesting part is how the query is constructed:
private ConditionQuery constructSearchQuery(ConditionQuery query,
                                            MatchedIndex index) {
    ConditionQuery originQuery = query;
    Set<Id> indexFields = new HashSet<>();
    // Convert has(key, text) to has(key, textContainsAny(word1, word2))
    for (IndexLabel il : index.indexLabels()) {
        if (il.indexType() != IndexType.SEARCH) {
            continue;
        }
        Id indexField = il.indexField();
        String fieldValue = (String) query.userpropValue(indexField);
        Set<String> words = this.segmentWords(fieldValue);
        indexFields.add(indexField);

        query = query.copy();
        query.unsetCondition(indexField);
        query.query(Condition.textContainsAny(indexField, words));
    }

    // Register results filter to compare property value and search text
    query.registerResultsFilter(elem -> {
        for (Condition cond : originQuery.conditions()) {
            Object key = cond.isRelation() ? ((Relation) cond).key() : null;
            if (key instanceof Id && indexFields.contains(key)) {
                // This is an index field of search index
                Id field = (Id) key;
                assert elem != null;
                HugeProperty<?> property = elem.getProperty(field);
                String propValue = propertyValueToString(property.value());
                String fieldValue = (String) originQuery.userpropValue(field);
                if (this.matchSearchIndexWords(propValue, fieldValue)) {
                    continue;
                }
                return false;
            }
            if (!cond.test(elem)) {
                return false;
            }
        }
        return true;
    });

    return query;
}
  • First, segment the search text into words
  • Then rewrite the query: convert has(key, text) into has(key, textContainsAny(word1, word2))
  • Finally, because the index lookup may over-match, registerResultsFilter installs a results filter that re-checks each element's property value against the original search text.

Ordinary indexes

Ordinary indexes likewise start by constructing the index queries:

public IndexQueries constructIndexQueries(ConditionQuery query) {
    // Condition query => Index Queries
    if (this.indexLabels().size() == 1) {
        /*
         * Query by single index or composite index
         */
        IndexLabel il = this.indexLabels().iterator().next();
        ConditionQuery indexQuery = constructQuery(query, il);
        assert indexQuery != null;
        return IndexQueries.of(il, indexQuery);
    } else {
        /*
         * Query by joint indexes
         */
        IndexQueries queries = buildJointIndexesQueries(query, this);
        assert !queries.isEmpty();
        return queries;
    }
}

If exactly one index matches, the query goes straight through it, which is the simplest case.

If several indexes match, a joint-index query is required (buildJointIndexesQueries).

Finally, doSingleOrJointIndex fetches the results:

@Watched(prefix = "index")
private IdHolder doSingleOrJointIndex(IndexQueries queries) {
    if (queries.size() == 1) {
        return this.doSingleOrCompositeIndex(queries);
    } else {
        return this.doJointIndex(queries);
    }
}

If queries.size() > 1, a joint index is needed. But databases generally use only one index per query, and HugeGraph behaves similarly:

@Watched(prefix = "index")
private IdHolder doJointIndex(IndexQueries queries) {
    if (queries.oomRisk()) {
        LOG.warn("There is OOM risk if the joint operation is based on a " +
                 "large amount of data, please use single index + filter " +
                 "instead of joint index: {}", queries.rootQuery());
    }
    // All queries are joined with AND
    Set<Id> intersectIds = null;
    boolean filtering = false;
    IdHolder resultHolder = null;
    for (Map.Entry<IndexLabel, ConditionQuery> e : queries.entrySet()) {
        IndexLabel indexLabel = e.getKey();
        ConditionQuery query = e.getValue();
        assert !query.paging();
        if (!query.noLimit() && queries.size() > 1) {
            // Unset limit for intersection operation
            query.limit(Query.NO_LIMIT);
        }
        /*
         * Try to query by joint indexes:
         * 1 If there is any index exceeded the threshold, transform into
         *   partial index query, then filter after back-table.
         * 1.1 Return the holder of the first index that not exceeded the
         *     threshold if there exists one index, this holder will be used
         *     as the only query condition.
         * 1.2 Return the holder of the first index if all indexes exceeded
         *     the threshold.
         * 2 Else intersect holders for all indexes, and return intersection
         *   ids of all indexes.
         */
        IdHolder holder = this.doIndexQuery(indexLabel, query);
        if (resultHolder == null) {
            resultHolder = holder;
        }
        assert this.indexIntersectThresh > 0; // default value is 1000
        Set<Id> ids = ((BatchIdHolder) holder).peekNext(
                      this.indexIntersectThresh).ids();
        if (ids.size() >= this.indexIntersectThresh) {
            // Transform into filtering
            filtering = true;
            query.optimized(OptimizedType.INDEX_FILTER);
        } else if (filtering) {
            assert ids.size() < this.indexIntersectThresh;
            resultHolder = holder;
            break;
        } else {
            if (intersectIds == null) {
                intersectIds = ids;
            } else {
                CollectionUtil.intersectWithModify(intersectIds, ids);
            }
            if (intersectIds.isEmpty()) {
                break;
            }
        }
    }

    if (filtering) {
        return resultHolder;
    } else {
        assert intersectIds != null;
        return new FixedIdHolder(queries.asJointQuery(), intersectIds);
    }
}
  • The indexes are read one by one; for each, up to indexIntersectThresh matching index ids are peeked first. indexIntersectThresh caps how many index ids are read at a time and defaults to 1000.
  • If the number of ids >= indexIntersectThresh, HugeGraph decides the match set is too large to resolve through index intersection and switches to filtering (OptimizedType.INDEX_FILTER): read the candidate results and filter them against the query conditions.
  • If some index is small enough, resultHolder keeps the holder of that smaller index.
  • If every index stays below indexIntersectThresh (the ideal case), the id sets are simply intersected (CollectionUtil.intersectWithModify).

Once the ids are obtained, the remaining work is reading the elements by id and filtering the results; the intersection step is sketched below.
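For intuition, the intersection branch boils down to something like the following sketch (the ids are invented; the real code intersects Id sets via CollectionUtil.intersectWithModify):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Ids matched by the city index and by the age index,
// both sets below indexIntersectThresh.
Set<Long> byCity = new HashSet<>(Arrays.asList(1L, 2L, 3L));
Set<Long> byAge = new HashSet<>(Arrays.asList(2L, 3L, 4L));

// AND semantics: intersect in place, like intersectWithModify.
byCity.retainAll(byAge); // byCity is now {2, 3}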

How are the matching ids read from the index in the first place?

The key code is in AbstractTransaction:

@Watched(prefix = "tx")
public QueryResults<BackendEntry> query(Query query) {
LOG.debug("Transaction query: {}", query);
/*
* NOTE: it's dangerous if an IdQuery/ConditionQuery is empty
* check if the query is empty and its class is not the Query itself
*/
if (query.empty() && !query.getClass().equals(Query.class)) {
throw new BackendException("Query without any id or condition");
}

Query squery = this.serializer.writeQuery(query);

// Do rate limit if needed
RateLimiter rateLimiter = this.graph.readRateLimiter();
if (rateLimiter != null && query.resultType().isGraph()) {
double time = rateLimiter.acquire(1);
if (time > 0) {
LOG.debug("Waited for {}s to query", time);
}
BackendEntryIterator.checkInterrupted();
}

this.beforeRead();
try {
return new QueryResults<>(this.store.query(squery), query);
} finally {
this.afterRead(); // TODO: not complete the iteration currently
}
}

Drilling further down, the core is writeQueryCondition:

@Override
protected Query writeQueryCondition(Query query) {
    HugeType type = query.resultType();
    if (!type.isIndex()) {
        return query;
    }

    ConditionQuery cq = (ConditionQuery) query;

    if (type.isNumericIndex()) {
        // Convert range-index/shard-index query to id range query
        return this.writeRangeIndexQuery(cq);
    } else {
        assert type.isSearchIndex() || type.isSecondaryIndex() ||
               type.isUniqueIndex();
        // Convert secondary-index or search-index query to id query
        return this.writeStringIndexQuery(cq);
    }
}

A range-index query is converted into a scan from indexlabelid:start to indexlabelid:end:

private Query writeRangeIndexQuery(ConditionQuery query) {
    Id index = query.condition(HugeKeys.INDEX_LABEL_ID);
    E.checkArgument(index != null, "Please specify the index label");

    List<Condition> fields = query.syspropConditions(HugeKeys.FIELD_VALUES);
    E.checkArgument(!fields.isEmpty(),
                    "Please specify the index field values");

    HugeType type = query.resultType();
    Id start = null;
    if (query.paging() && !query.page().isEmpty()) {
        byte[] position = PageState.fromString(query.page()).position();
        start = new BinaryId(position, null);
    }

    RangeConditions range = new RangeConditions(fields);
    if (range.keyEq() != null) {
        Id id = formatIndexId(type, index, range.keyEq(), true);
        if (start == null) {
            return new IdPrefixQuery(query, id);
        }
        E.checkArgument(Bytes.compare(start.asBytes(), id.asBytes()) >= 0,
                        "Invalid page out of lower bound");
        return new IdPrefixQuery(query, start, id);
    }

    Object keyMin = range.keyMin();
    Object keyMax = range.keyMax();
    boolean keyMinEq = range.keyMinEq();
    boolean keyMaxEq = range.keyMaxEq();
    if (keyMin == null) {
        E.checkArgument(keyMax != null,
                        "Please specify at least one condition");
        // Set keyMin to min value
        keyMin = NumericUtil.minValueOf(keyMax.getClass());
        keyMinEq = true;
    }

    Id min = formatIndexId(type, index, keyMin, false);
    if (!keyMinEq) {
        /*
         * Increase 1 to keyMin, index GT query is a scan with GT prefix,
         * inclusiveStart=false will also match index started with keyMin
         */
        increaseOne(min.asBytes());
        keyMinEq = true;
    }

    if (start == null) {
        start = min;
    } else {
        E.checkArgument(Bytes.compare(start.asBytes(), min.asBytes()) >= 0,
                        "Invalid page out of lower bound");
    }

    if (keyMax == null) {
        keyMax = NumericUtil.maxValueOf(keyMin.getClass());
        keyMaxEq = true;
    }
    Id max = formatIndexId(type, index, keyMax, false);
    if (keyMaxEq) {
        keyMaxEq = false;
        increaseOne(max.asBytes());
    }
    return new IdRangeQuery(query, start, keyMinEq, max, keyMaxEq);
}

Other index types are converted into prefix-match queries:

private Query writeStringIndexQuery(ConditionQuery query) {
    E.checkArgument(query.allSysprop() &&
                    query.conditions().size() == 2,
                    "There should be two conditions: " +
                    "INDEX_LABEL_ID and FIELD_VALUES " +
                    "in secondary index query");

    Id index = query.condition(HugeKeys.INDEX_LABEL_ID);
    Object key = query.condition(HugeKeys.FIELD_VALUES);

    E.checkArgument(index != null, "Please specify the index label");
    E.checkArgument(key != null, "Please specify the index key");

    Id prefix = formatIndexId(query.resultType(), index, key, true);
    return prefixQuery(query, prefix);
}

For example, a secondary-index lookup for city=北京 under index label 5 becomes a prefix scan whose prefix is the spliced id "5:北京". When the query reaches the RocksDB backend:

protected BackendColumnIterator queryBy(Session session, Query query) {
    // Query all
    if (query.empty()) {
        return this.queryAll(session, query);
    }

    // Query by prefix
    if (query instanceof IdPrefixQuery) {
        IdPrefixQuery pq = (IdPrefixQuery) query;
        return this.queryByPrefix(session, pq);
    }

    // Query by range
    if (query instanceof IdRangeQuery) {
        IdRangeQuery rq = (IdRangeQuery) query;
        return this.queryByRange(session, rq);
    }

    // Query by id
    if (query.conditions().isEmpty()) {
        assert !query.ids().isEmpty();
        // NOTE: this will lead to lazy create rocksdb iterator
        return new BackendColumnIteratorWrapper(new FlatMapperIterator<>(
               query.ids().iterator(), id -> this.queryById(session, id)
        ));
    }

    // Query by condition (or condition + id)
    ConditionQuery cq = (ConditionQuery) query;
    return this.queryByCond(session, cq);
}

Prefix query:

protected BackendColumnIterator queryByPrefix(Session session,
                                              IdPrefixQuery query) {
    int type = query.inclusiveStart() ?
               Session.SCAN_GTE_BEGIN : Session.SCAN_GT_BEGIN;
    type |= Session.SCAN_PREFIX_END;
    return session.scan(this.table(), query.start().asBytes(),
                        query.prefix().asBytes(), type);
}

Range query:

protected BackendColumnIterator queryByRange(Session session,
                                             IdRangeQuery query) {
    byte[] start = query.start().asBytes();
    byte[] end = query.end() == null ? null : query.end().asBytes();
    int type = query.inclusiveStart() ?
               Session.SCAN_GTE_BEGIN : Session.SCAN_GT_BEGIN;
    if (end != null) {
        type |= query.inclusiveEnd() ?
                Session.SCAN_LTE_END : Session.SCAN_LT_END;
    }
    return session.scan(this.table(), start, end, type);
}

After the query, BinarySerializer restores each entry into an index via readIndex:

@Override
public HugeIndex readIndex(HugeGraph graph, ConditionQuery query,
                           BackendEntry bytesEntry) {
    if (bytesEntry == null) {
        return null;
    }

    BinaryBackendEntry entry = this.convertEntry(bytesEntry);
    // NOTE: index id without length prefix
    byte[] bytes = entry.id().asBytes();
    HugeIndex index = HugeIndex.parseIndexId(graph, entry.type(), bytes);

    Object fieldValues = null;
    if (!index.type().isRangeIndex()) {
        fieldValues = query.condition(HugeKeys.FIELD_VALUES);
        if (!index.fieldValues().equals(fieldValues)) {
            // Update field-values for hashed or encoded index-id
            index.fieldValues(fieldValues);
        }
    }

    this.parseIndexName(graph, query, entry, index, fieldValues);
    return index;
}

parseIndexId and parseIndexName are the decode side of the storage encoding; the code mirrors the write path: one encodes, the other decodes.

Indexes and global-sort optimization

Let's close with a question: how do you optimize a global sort over the matching results?

For example, suppose results must be sorted by update time (update_time). With no other conditions, the sort can be rewritten as the query update_time > 0, because a range index is stored in ascending order (see the storage-layout analysis above).

What about descending order?

  • When the business is simple, add a redundant field such as update_time_desc holding the fixed value -update_time, so the newest data comes first (see the sketch below).
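A hedged sketch of the idea (the property and label names are invented):

// On write: maintain a mirrored descending field.
vertex.property("update_time", ts);
vertex.property("update_time_desc", -ts);

// On read: an ascending range scan on update_time_desc is a
// descending scan on update_time, i.e. newest first.
g.V().hasLabel("post")
     .has("update_time_desc", P.lt(0))
     .limit(10);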

However, this trick no longer works once other conditions are involved (see doJointIndex). How can that case be optimized?

We'll cover that in the next post.



Converting chemfig chemical formulas to PDF

SMILES and chemfig

A molecular structure can be defined with SMILES, a specification for unambiguously describing molecular structure with ASCII strings.

SMILES (Simplified Molecular Input Line Entry Specification) is a specification for unambiguously describing molecular structure with ASCII strings. SMILES was developed by Arthur Weininger and David Weininger in the late 1980s, and has been modified and extended by others, most notably Daylight Chemical Information Systems Inc.

However, SMILES requires some chemistry background, whereas chemfig defines a notation at the graphical level, making chemical formulas easy to define and render. Conveniently, SMILES can be converted to chemfig.

For example:

CN1C=NC2=C1C(=O)N(C(=O)N2C)C

It can be converted with mol2chemfig:

mol2chemfig -wz -i direct 'CN1C=NC2=C1C(=O)N(C(=O)N2C)C' > caffeine.tex

After conversion:

\chemfig{-[:138]N-[:84]=^[:156]N-[:228]=[:300](-[:240](-[:180]N(-[:240]%
)-[:120](-[:60]N(-[:120])-)=[:180]O)=[:300]O)-[:12]\phantom{N}}

Converting chemfig to PDF

We can convert the tex file to PDF with pdflatex (a TeX Live tool):

Pull the TeX Live image:

docker pull listx/texlive:2020
docker run -it --rm -v `pwd`:/app listx/texlive:2020 bash

Then convert with pdflatex. First, create a tex file test.tex based on the mol2chemfig template (the mol2chemfig package can be downloaded from the mol2chemfig project page), with \chemfig{H_3C-[:30]N**6(-(=O)-(**5(-N(-CH_3)--N-))--N(-CH_3)-(=O)-)} placed in the body. Note that the %(...)s placeholders in the template below come from mol2chemfig and need concrete values before compiling:

\documentclass{minimal}
\usepackage{xcolor, mol2chemfig}
\usepackage[margin=%(margin)spt,papersize={%(width)spt, %(height)spt}]{geometry}

\usepackage[helvet]{sfmath}
\setcrambond{2.5pt}{0.4pt}{1.0pt}
\setbondoffset{1pt}
\setdoublesep{2pt}
\setatomsep{%(atomsep)spt}
\renewcommand{\printatom}[1]{\fontsize{8pt}{10pt}\selectfont{\ensuremath{\mathsf{#1}}}}

\setlength{\parindent}{0pt}
\setlength{\fboxsep}{0pt}
\begin{document}
\vspace*{\fill}
\vspace{-8pt}
\begin{center}

\chemfig{H_3C-[:30]N**6(-(=O)-(**5(-N(-CH_3)--N-))--N(-CH_3)-(=O)-)}

\end{center}
\vspace*{\fill}
\end{document}

Then run the conversion:

pdflatex -interaction=nonstopmode  test.tex

After a second or two the PDF is generated; open it:

(screenshot: the generated PDF)

How do we return it to the front end? Read the file and encode it as base64. Python code:

import base64

with open('test.pdf', 'rb') as f:
    pdf_bytes = f.read()
encoded = base64.b64encode(pdf_bytes).decode('ascii')
pdflink = "data:application/pdf;base64,{}".format(encoded)


From WAV to Ogg Opus, and decoding Opus in Java

PCM

Sound in nature has extremely complex waveforms. The usual encoding is pulse-code modulation, i.e. PCM, which converts a continuously varying analog signal into digital codes through three steps: sampling, quantization, and encoding.

Sample rate

The sampling frequency, also called the sampling rate, defines how many samples per second are taken from a continuous signal to form a discrete signal, expressed in hertz (Hz). Its reciprocal is the sampling period, the time between samples. Put simply, the sample rate is how many signal samples the computer collects per second.

The 16K rate commonly used in industry means 16000 sample points per second (at 16 bits per sample, that is 32000 bytes of PCM per second).

WAV

PCM is raw audio. By the definition of sample rate, playing PCM requires knowing the sample rate, so a file format is needed to wrap PCM. WAV is a standard digital audio file format developed by Microsoft for Windows; it can record all kinds of mono or stereo sound.

(figure: WAV file format layout)

The first 44 bytes of a wav file define the sample rate, channel count, and other parameters; with this header a player can play the PCM data that follows.
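For reference, here is a minimal sketch of that 44-byte header in Java (the helper is ours, not from any library; format 1 = uncompressed PCM, little-endian):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

static byte[] wavHeader(int dataLen, int sampleRate,
                        short channels, short bitsPerSample) {
    int byteRate = sampleRate * channels * bitsPerSample / 8;
    ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    b.put("RIFF".getBytes());
    b.putInt(36 + dataLen);              // file size minus 8
    b.put("WAVE".getBytes());
    b.put("fmt ".getBytes());
    b.putInt(16);                        // PCM fmt-chunk size
    b.putShort((short) 1);               // audio format: 1 = PCM
    b.putShort(channels);
    b.putInt(sampleRate);
    b.putInt(byteRate);                  // bytes per second
    b.putShort((short) (channels * bitsPerSample / 8)); // block align
    b.putShort(bitsPerSample);
    b.put("data".getBytes());
    b.putInt(dataLen);                   // PCM payload length
    return b.array();
}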

MP3

WAV solves PCM playback nicely, but PCM is simply too large, so audio formats such as MP3 appeared, compressing the audio to make it easy to transmit and share over the Internet.

Ogg and Opus

As audio and video applications spread, more and more codecs appeared in industry, such as Speex and Opus.

The Opus codec is designed for interactive speech and audio transmission over the Internet. It was designed by the IETF codec working group and merges Skype's SILK and Xiph.Org's CELT technologies.


Encoding and decoding Opus

https://github.com/lostromb/concentus is a pure Java library that can encode and decode Opus.

Opus usually encodes frame by frame: for example, a 320-sample (640-byte) frame encodes to 70-odd bytes. Like PCM, encoded Opus cannot be played directly:

  • The audio metadata (sample rate, channel count, bitrate, and so on) cannot be recovered from the file itself
  • There are no frame delimiters, so frames cannot be split out of a continuous stream (especially with VBR).

With the rise of HTML5 came the OGG media container. Ogg is a free, open-standard multimedia container format maintained by the Xiph.Org Foundation. It is unrestricted by software patents and is designed for efficient streaming and processing of high-quality digital multimedia. "Ogg" refers to a file format that can multiplex a variety of free and open-source codecs, handling audio, video, text (such as subtitles), and metadata.

Ogg audio

Compression   | Format | Description
Lossy         | Speex  | speech at low bitrates (~2.1-32 kbit/s per channel)
Lossy         | Vorbis | general audio at mid-to-high variable bitrates (~16-500 kbit/s per channel)
Lossy         | Opus   | speech, music and general audio at low and high variable bitrates (~6-510 kbit/s per channel)
Lossless      | FLAC   | archival and high-fidelity audio data
Uncompressed  | OggPCM | uncompressed PCM audio, similar to WAV

Reference: https://juejin.cn/post/6844904016254599175

Borrowing a diagram from that post:

(figure: Ogg encapsulation)

Decoding an Opus file in Java

ffmpeg can easily convert wav to an opus file (e.g. ffmpeg -i wav16k.wav -c:a libopus wav16k.opus), which is essentially Opus wrapped in an Ogg container; we can read such an opus file with vorbis-java.

OpusInfoTool can print the Opus file's info:

Processing file "C:\Users\jqpeng\Downloads\opus\wav16k.opus"

Opus Headers:
  Version: 1
  Vendor: Lavf58.27.103
  Channels: 1
  Rate: 16000Hz
  Pre-Skip: 104
  Playback Gain: 0dB

User Comments:
  encoder=Lavc58.53.100 libopus

Logical stream 81c1bbc0 (-2118009920) completed

Opus Audio:
  Total Data Packets: 579
  Total Data Length: 41406
  Audio Length Seconds: 11.564333333333334
  Audio Length: 00:00:11.56
  Packet duration:     20.0ms (max),     20.0ms (avg),     20.0ms (min)
  Page duration:     1000.0ms (max),    965.0ms (avg),    580.0ms (min)
  Total data length: 41406 (overhead: 2.34%)
  Playback length: 00:00:11.56
  Average bitrate: 28.70 kb/s, w/o overhead: 27.97 kb/s

Then, with the help of concentus, we can decode the Opus file into a PCM file.

public void testDecode() throws IOException, OpusException {
        FileInputStream fs = new FileInputStream("\\wav16k.opus");
        OggFile ogg = new OggFile(fs);
        OpusFile of = new OpusFile(ogg);
        OpusAudioData ad = null;

        System.out.println(of.getInfo().getSampleRate());
        System.out.println(of.getInfo().getNumChannels());

        OpusDecoder decoder = new OpusDecoder(of.getInfo().getSampleRate(),
                                              of.getInfo().getNumChannels());
        System.out.println(of.getTags());
        FileOutputStream fileOut = new FileOutputStream("wav16k.pcm");
        // PCM output buffer: sampleRate/2 samples * 2 bytes = sampleRate bytes
        byte[] data_packet = new byte[of.getInfo().getSampleRate()];
        int samples = 0;
        while ((ad = of.getNextAudioPacket()) != null) {
            // NOTE: samplesDecoded is the number of decoded shorts; multiply by 2 for bytes
            int samplesDecoded =
                    decoder.decode(ad.getData(), 0, ad.getData().length
                            , data_packet, 0, of.getInfo().getSampleRate() / 2,
                                   false);

            fileOut.write(data_packet, 0, samplesDecoded * 2);
            samples += samplesDecoded;
        }

        System.out.println("samples: " + samples);
        System.out.println("durationSeconds: " + (samples / 16000f));
        fileOut.close();
    }


Speeding up Java web development with JHipster

JHipster — the name reads as "Java hipster", a Java enthusiast!

JHipster is a development platform to quickly generate, develop, & deploy modern web applications & microservice architectures.

Through code generation, JHipster lets you build web applications and microservices quickly.

Installation

  1. Install Java, Git, and Node.js
  2. Install JHipster: npm install -g generator-jhipster
  3. Create the application directory: mkdir myApp && cd myApp
  4. Run the jhipster command and configure the application following the prompts
  5. Optionally, generate a jhipster-jdl.jh file with JDL Studio
  6. Then generate code with jhipster jdl jhipster-jdl.jh; JDL is covered in detail below

Getting started with JDL

JDL is JHipster's data-model definition language. With a JDL file we define the data structures, and from it JHipster generates entity classes, service classes, and front-end pages.

For example, suppose we are building a complaints-and-suggestions feature and have designed the following table:

Field           | Comment         | Type             | Notes
record_id       | primary key     | bigint           | auto-increment
feedback_type   | feedback type   | unsigned tinyint | enum: [1: suggestion; 5: complaint]
title           | title           | varchar(64)      |
content         | description     | varchar(512)     |
feedback_status | feedback status | unsigned tinyint | enum: [1: to be submitted; 5: to be replied; 10: to be confirmed; 15: resolved]
last_reply_time | last reply time | timestamp        | used together with feedback_status; updated when the status is 2, for timeout detection
close_type      | close type      | unsigned tinyint | enum: [1: closed normally; 5: closed on timeout]
created_date    | creation time   | timestamp        |
created_by      | creator         | char(32)         |

With JHipster, we can define it in JDL:

/**
 * Feedback record table
 */
entity FeedbackRecord {
    /** feedback type */
    feedbackType FeedbackType,
    /** title */
    title String,
    /** feedback status */
    feedbackStatus FeedbackStatus,
    /** last reply time */
    lastReplyTime Integer,
    /** close type */
    closeType FeedbackCloseType,
    /** creation time */
    createdDate Instant,
    /** creator */
    createdBy String
}
/** feedback type */
enum FeedbackType {
    ADVICE,
    COMPLAINTS
}
/** feedback status */
enum FeedbackStatus {
    TO_BE_SUBMIT, TO_BE_REPLY, TO_BE_CONFIRMED
}
/** close type */
enum FeedbackCloseType {
    NORMALLY, TIMEOUT
}

dto * with mapstruct
service all with serviceImpl
paginate all with pagination

A closer look:

Entities and fields

entity declares an entity, to which fields can be added. Note: there is no need to add an id field; it is generated automatically.

The syntax is:

[<entity javadoc>]
[<entity annotation>*]
entity <entity name> [(<table name>)] {
  [<field javadoc>]
  [<field annotation>*]
  <field name> <field type> [<validation>*]
}

For example:

entity A {
  name String required
  age Integer min(42) max(42)
}

Validations such as required, min, and max can be added.

Comments on fields:

/**
 * This is a comment
 * about a class
 * @author Someone
 */
entity A {
  /** name */
   name String
   age Integer // this is yet another comment
}

JHipster supports many field types. This support depends on your database backend, so the types are described with Java types: a Java String is stored differently in Oracle or Cassandra, and generating the right database access code for you is one of JHipster's strengths.

  • String: a Java string. Its default size depends on the underlying backend (255 by default with JPA), but you can change it with a validation rule (e.g. a max size of 1024).
  • Integer: a Java integer.
  • Long: a Java long integer.
  • Float: a Java float.
  • Double: a Java double.
  • BigDecimal: a java.math.BigDecimal object, used when exact arithmetic is needed (typically for financial operations).
  • LocalDate: a java.time.LocalDate object, for handling dates correctly in Java.
  • Instant: a java.time.Instant object, representing a timestamp, an instantaneous point on the timeline.
  • ZonedDateTime: a java.time.ZonedDateTime object, representing a local date-time in a given time zone (typically a calendar appointment). Note that neither the REST layer nor the persistence layer supports time zones, so you should most likely use Instant instead.
  • Duration: a java.time.Duration object, representing an amount of time.
  • UUID: a java.util.UUID object.
  • Boolean: a Java boolean.
  • Enumeration: a Java enum object. When this type is chosen, the sub-generator asks which values the enum should contain and creates a dedicated enum class to store them.
  • Blob: a Blob object, for storing binary data. When this type is chosen, the sub-generator asks whether you want generic binary data, an image, or a CLOB (long text). Images get special handling on the Angular side so they can be displayed to end users.

Field data types and database support:

(figure: field data types by database)

Enums

For enumerable states, enums are recommended:

enum [<enum name>] {
  <ENUM KEY> ([<enum value>])
}

For example:

/** feedback type */
enum FeedbackType {
    ADVICE,
    COMPLAINTS
}

Relationships

SQL databases support relationships between tables:

  • OneToOne
  • OneToMany
  • ManyToOne
  • ManyToMany

How do you define a relationship?

relationship (OneToMany | ManyToOne | OneToOne | ManyToMany) {
  <from entity>[{<relationship name>[(<display field>)]}] to <to entity>[{<relationship name>[(<display field>)]}]+
}

For example, below we define two entities, File and Chunk, where each Chunk belongs to one File:

/**
 * File
 */
entity File {
    /** file name */
    name String,
    /** file size */
    size Long,
    /** file path */
    path String,
    /** number of chunks */
    chunks Integer,
    /** whether complete */
    complete Integer
}

/**
 * File chunk
 */
entity Chunk {
    /** md5 value */
    md5 String,
    /** chunk number */
    number Integer,
    /** chunk name */
    name String
}

relationship ManyToOne {
    /** owning file */
    Chunk{file} to File
}

The corresponding relationship diagram:

(figure: entity relationship diagram)

Code-generation options

JHipster offers rich options controlling the generation strategy: whether to generate DTO objects, whether to support pagination, whether to generate service classes, and if so whether to use serviceClass or serviceImpl.

For example:

entity A {
  name String required
}
entity B
entity C

// filter entities
filter *

// generate DTOs
dto A, B with mapstruct

// pagination
paginate A with infinite-scroll
paginate B with pagination
paginate C with pager  // pager is only available in AngularJS

// generate services
service A with serviceClass
service C with serviceImpl

Generating the code

First define the JDL file:

/**
 * Feedback record table
 */
entity FeedbackRecord {
    /** feedback type */
    feedbackType FeedbackType,
    /** title */
    title String,
    /** feedback status */
    feedbackStatus FeedbackStatus,
    /** last reply time */
    lastReplyTime Integer,
    /** close type */
    closeType FeedbackCloseType,
    /** creation time */
    createdDate Instant,
    /** creator */
    createdBy String
}
/** feedback type */
enum FeedbackType {
    ADVICE,
    COMPLAINTS
}
/** feedback status */
enum FeedbackStatus {
    TO_BE_SUBMIT, TO_BE_REPLY, TO_BE_CONFIRMED
}
/** close type */
enum FeedbackCloseType {
    NORMALLY, TIMEOUT
}
// filter entities
filter *
// generate DTOs
dto * with mapstruct
// generate services with interface and implementation
service all with serviceImpl
// pagination support
paginate all with pagination

Then generate the code:

jhipster jdl feedback.jh --force

You should see output similar to the following:

D:\Project\jhipster-7>jhipster jdl feedback.jh --force
INFO! Using JHipster version installed locally in current project's node_modules
INFO! Executing import-jdl feedback.jh
INFO! The JDL is being parsed.
info: The dto option is set for FeedbackRecord, the 'serviceClass' value for the 'service' is gonna be set for this entity if no other value has been set.
INFO! Found entities: FeedbackRecord.
INFO! The JDL has been successfully parsed
INFO! Generating 0 applications.
INFO! Generating 1 entity.
INFO! Generating entities for application undefined in a new parallel process

Found the D:\Project\jhipster-7\.jhipster\File.json configuration file, entity can be automatically generated!


Found the D:\Project\jhipster-7\.jhipster\Chunk.json configuration file, entity can be automatically generated!


Found the D:\Project\jhipster-7\.jhipster\FeedbackRecord.json configuration file, entity can be automatically generated!

     info Creating changelog for entities File,Chunk,FeedbackRecord
    force .yo-rc.json
    force .jhipster\FeedbackRecord.json
    force .jhipster\File.json
    force .jhipster\Chunk.json
    force src\main\java\com\company\blog\domain\File.java
    force src\main\java\com\company\blog\web\rest\FileResource.java
    force src\main\java\com\company\blog\repository\FileRepository.java
    force src\main\java\com\company\blog\service\FileService.java
    force src\main\java\com\company\blog\service\impl\FileServiceImpl.java
    force src\main\java\com\company\blog\service\dto\FileDTO.java
    force src\main\java\com\company\blog\service\mapper\EntityMapper.java
    force src\main\java\com\company\blog\service\mapper\FileMapper.java
    force src\test\java\com\company\blog\web\rest\FileResourceIT.java
    force src\test\java\com\company\blog\domain\FileTest.java
    force src\test\java\com\company\blog\service\dto\FileDTOTest.java
    force src\test\java\com\company\blog\service\mapper\FileMapperTest.java
    force src\main\webapp\app\shared\model\file.model.ts
    force src\main\webapp\app\entities\file\file-details.vue
    force src\main\webapp\app\entities\file\file-details.component.ts
    force src\main\webapp\app\entities\file\file.vue
    force src\main\webapp\app\entities\file\file.component.ts
    force src\main\webapp\app\entities\file\file.service.ts
    force src\main\webapp\app\entities\file\file-update.vue
    force src\main\webapp\app\entities\file\file-update.component.ts
    force src\test\javascript\spec\app\entities\file\file.component.spec.ts
    force src\test\javascript\spec\app\entities\file\file-details.component.spec.ts
    force src\test\javascript\spec\app\entities\file\file.service.spec.ts
    force src\test\javascript\spec\app\entities\file\file-update.component.spec.ts
    force src\main\webapp\app\router\entities.ts
    force src\main\webapp\app\main.ts
    force src\main\webapp\app\core\jhi-navbar\jhi-navbar.vue
    force src\main\webapp\i18n\zh-cn\file.json
    force src\main\webapp\i18n\zh-cn\global.json
    force src\main\webapp\i18n\en\file.json
    force src\main\webapp\i18n\en\global.json
    force src\main\java\com\company\blog\domain\Chunk.java
    force src\main\java\com\company\blog\web\rest\ChunkResource.java
    force src\main\java\com\company\blog\repository\ChunkRepository.java
    force src\main\java\com\company\blog\service\ChunkService.java
    force src\main\java\com\company\blog\service\impl\ChunkServiceImpl.java
    force src\main\java\com\company\blog\service\dto\ChunkDTO.java
    force src\main\java\com\company\blog\service\mapper\ChunkMapper.java
    force src\test\java\com\company\blog\web\rest\ChunkResourceIT.java
    force src\test\java\com\company\blog\domain\ChunkTest.java
    force src\test\java\com\company\blog\service\dto\ChunkDTOTest.java
    force src\test\java\com\company\blog\service\mapper\ChunkMapperTest.java
    force src\main\webapp\app\shared\model\chunk.model.ts
    force src\main\webapp\app\entities\chunk\chunk-details.vue
    force src\main\webapp\app\entities\chunk\chunk-details.component.ts
    force src\main\webapp\app\entities\chunk\chunk.vue
    force src\main\webapp\app\entities\chunk\chunk.component.ts
    force src\main\webapp\app\entities\chunk\chunk.service.ts
    force src\main\webapp\app\entities\chunk\chunk-update.vue
    force src\main\webapp\app\entities\chunk\chunk-update.component.ts
    force src\test\javascript\spec\app\entities\chunk\chunk.component.spec.ts
    force src\test\javascript\spec\app\entities\chunk\chunk-details.component.spec.ts
    force src\test\javascript\spec\app\entities\chunk\chunk.service.spec.ts
    force src\test\javascript\spec\app\entities\chunk\chunk-update.component.spec.ts
    force src\main\webapp\i18n\zh-cn\chunk.json
    force src\main\webapp\i18n\en\chunk.json
   create src\main\java\com\company\blog\domain\FeedbackRecord.java
   create src\main\java\com\company\blog\web\rest\FeedbackRecordResource.java
   create src\main\java\com\company\blog\repository\FeedbackRecordRepository.java
   create src\main\java\com\company\blog\service\FeedbackRecordService.java
   create src\main\java\com\company\blog\service\impl\FeedbackRecordServiceImpl.java
   create src\main\java\com\company\blog\service\dto\FeedbackRecordDTO.java
   create src\main\java\com\company\blog\service\mapper\FeedbackRecordMapper.java
   create src\test\java\com\company\blog\web\rest\FeedbackRecordResourceIT.java
   create src\test\java\com\company\blog\domain\FeedbackRecordTest.java
   create src\test\java\com\company\blog\service\dto\FeedbackRecordDTOTest.java
   create src\test\java\com\company\blog\service\mapper\FeedbackRecordMapperTest.java
   create src\main\java\com\company\blog\domain\enumeration\FeedbackType.java
   create src\main\java\com\company\blog\domain\enumeration\FeedbackStatus.java
   create src\main\java\com\company\blog\domain\enumeration\FeedbackCloseType.java
   create src\main\webapp\app\shared\model\feedback-record.model.ts
   create src\main\webapp\app\entities\feedback-record\feedback-record-details.vue
   create src\main\webapp\app\entities\feedback-record\feedback-record-details.component.ts
   create src\main\webapp\app\entities\feedback-record\feedback-record.vue
   create src\main\webapp\app\entities\feedback-record\feedback-record.component.ts
   create src\main\webapp\app\entities\feedback-record\feedback-record.service.ts
    force src\main\resources\config\liquibase\changelog\20210312045459_added_entity_File.xml
    force src\main\resources\config\liquibase\fake-data\file.csv
   create src\main\webapp\app\entities\feedback-record\feedback-record-update.vue
    force src\main\resources\config\liquibase\master.xml
    force src\main\resources\config\liquibase\changelog\20210312045500_added_entity_Chunk.xml
    force src\main\resources\config\liquibase\changelog\20210312045500_added_entity_constraints_Chunk.xml
    force src\main\resources\config\liquibase\fake-data\chunk.csv
   create src\main\resources\config\liquibase\changelog\20210312072243_added_entity_FeedbackRecord.xml
   create src\main\resources\config\liquibase\fake-data\feedback_record.csv
   create src\main\webapp\app\entities\feedback-record\feedback-record-update.component.ts
   create src\test\javascript\spec\app\entities\feedback-record\feedback-record.component.spec.ts
   create src\test\javascript\spec\app\entities\feedback-record\feedback-record-details.component.spec.ts
   create src\test\javascript\spec\app\entities\feedback-record\feedback-record.service.spec.ts
   create src\test\javascript\spec\app\entities\feedback-record\feedback-record-update.component.spec.ts
   create src\main\webapp\app\shared\model\enumerations\feedback-type.model.ts
   create src\main\webapp\app\shared\model\enumerations\feedback-status.model.ts
   create src\main\webapp\app\shared\model\enumerations\feedback-close-type.model.ts
   create src\main\webapp\i18n\zh-cn\feedbackType.json
   create src\main\webapp\i18n\en\feedbackType.json
   create src\main\webapp\i18n\zh-cn\feedbackStatus.json
   create src\main\webapp\i18n\en\feedbackStatus.json
   create src\main\webapp\i18n\zh-cn\feedbackCloseType.json
   create src\main\webapp\i18n\en\feedbackCloseType.json
   create src\main\webapp\i18n\zh-cn\feedbackRecord.json
   create src\main\webapp\i18n\en\feedbackRecord.json
Entity File generated successfully.
Entity Chunk generated successfully.
Entity FeedbackRecord generated successfully.

Running `webapp:build` to update client app

Domain, service, controller classes and more are all generated:

Testing the generated application

Run the application.

List page:

(screenshot: list page)

Edit page:

(screenshot: edit page)

A look at the generated code

Domain

package com.company.blog.domain;

import com.company.blog.domain.enumeration.FeedbackCloseType;
import com.company.blog.domain.enumeration.FeedbackStatus;
import com.company.blog.domain.enumeration.FeedbackType;
import java.io.Serializable;
import java.time.Instant;
import javax.persistence.*;

/**
 * Feedback record table
 */
@Entity
@Table(name = "feedback_record")
public class FeedbackRecord implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    /**
     * feedback type
     */
    @Enumerated(EnumType.STRING)
    @Column(name = "feedback_type")
    private FeedbackType feedbackType;

    /**
     * title
     */
    @Column(name = "title")
    private String title;

    /**
     * feedback status
     */
    @Enumerated(EnumType.STRING)
    @Column(name = "feedback_status")
    private FeedbackStatus feedbackStatus;

    /**
     * last reply time
     */
    @Column(name = "last_reply_time")
    private Integer lastReplyTime;

    /**
     * close type
     */
    @Enumerated(EnumType.STRING)
    @Column(name = "close_type")
    private FeedbackCloseType closeType;

    /**
     * creation time
     */
    @Column(name = "created_date")
    private Instant createdDate;

    /**
     * creator
     */
    @Column(name = "created_by")
    private String createdBy;

    // jhipster-needle-entity-add-field - JHipster will add fields here
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public FeedbackRecord id(Long id) {
        this.id = id;
        return this;
    }

    public FeedbackType getFeedbackType() {
        return this.feedbackType;
    }

    public FeedbackRecord feedbackType(FeedbackType feedbackType) {
        this.feedbackType = feedbackType;
        return this;
    }

    public void setFeedbackType(FeedbackType feedbackType) {
        this.feedbackType = feedbackType;
    }

    public String getTitle() {
        return this.title;
    }

    public FeedbackRecord title(String title) {
        this.title = title;
        return this;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public FeedbackStatus getFeedbackStatus() {
        return this.feedbackStatus;
    }

    public FeedbackRecord feedbackStatus(FeedbackStatus feedbackStatus) {
        this.feedbackStatus = feedbackStatus;
        return this;
    }

    public void setFeedbackStatus(FeedbackStatus feedbackStatus) {
        this.feedbackStatus = feedbackStatus;
    }

    public Integer getLastReplyTime() {
        return this.lastReplyTime;
    }

    public FeedbackRecord lastReplyTime(Integer lastReplyTime) {
        this.lastReplyTime = lastReplyTime;
        return this;
    }

    public void setLastReplyTime(Integer lastReplyTime) {
        this.lastReplyTime = lastReplyTime;
    }

    public FeedbackCloseType getCloseType() {
        return this.closeType;
    }

    public FeedbackRecord closeType(FeedbackCloseType closeType) {
        this.closeType = closeType;
        return this;
    }

    public void setCloseType(FeedbackCloseType closeType) {
        this.closeType = closeType;
    }

    public Instant getCreatedDate() {
        return this.createdDate;
    }

    public FeedbackRecord createdDate(Instant createdDate) {
        this.createdDate = createdDate;
        return this;
    }

    public void setCreatedDate(Instant createdDate) {
        this.createdDate = createdDate;
    }

    public String getCreatedBy() {
        return this.createdBy;
    }

    public FeedbackRecord createdBy(String createdBy) {
        this.createdBy = createdBy;
        return this;
    }

    public void setCreatedBy(String createdBy) {
        this.createdBy = createdBy;
    }

    // jhipster-needle-entity-add-getters-setters - JHipster will add getters and setters here

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof FeedbackRecord)) {
            return false;
        }
        return id != null && id.equals(((FeedbackRecord) o).id);
    }

    @Override
    public int hashCode() {
        // see https://vladmihalcea.com/how-to-implement-equals-and-hashcode-using-the-jpa-entity-identifier/
        return getClass().hashCode();
    }

    // prettier-ignore
    @Override
    public String toString() {
        return "FeedbackRecord{" +
            "id=" + getId() +
            ", feedbackType='" + getFeedbackType() + "'" +
            ", title='" + getTitle() + "'" +
            ", feedbackStatus='" + getFeedbackStatus() + "'" +
            ", lastReplyTime=" + getLastReplyTime() +
            ", closeType='" + getCloseType() + "'" +
            ", createdDate='" + getCreatedDate() + "'" +
            ", createdBy='" + getCreatedBy() + "'" +
            "}";
    }
}

Repository

@SuppressWarnings("unused")
@Repository
public interface FeedbackRecordRepository extends JpaRepository<FeedbackRecord, Long>, JpaSpecificationExecutor<FeedbackRecord> {}

Service

/**
 * Service Interface for managing {@link com.company.blog.domain.FeedbackRecord}.
 */
public interface FeedbackRecordService {
    /**
     * Save a feedbackRecord.
     *
     * @param feedbackRecordDTO the entity to save.
     * @return the persisted entity.
     */
    FeedbackRecordDTO save(FeedbackRecordDTO feedbackRecordDTO);

    /**
     * Partially updates a feedbackRecord.
     *
     * @param feedbackRecordDTO the entity to update partially.
     * @return the persisted entity.
     */
    Optional<FeedbackRecordDTO> partialUpdate(FeedbackRecordDTO feedbackRecordDTO);

    /**
     * Get all the feedbackRecords.
     *
     * @param pageable the pagination information.
     * @return the list of entities.
     */
    Page<FeedbackRecordDTO> findAll(Pageable pageable);

    /**
     * Get the "id" feedbackRecord.
     *
     * @param id the id of the entity.
     * @return the entity.
     */
    Optional<FeedbackRecordDTO> findOne(Long id);

    /**
     * Delete the "id" feedbackRecord.
     *
     * @param id the id of the entity.
     */
    void delete(Long id);
}

Controller

/**
 * REST controller for managing {@link com.company.blog.domain.FeedbackRecord}.
 */
@RestController
@RequestMapping("/api")
public class FeedbackRecordResource {

    private final Logger log = LoggerFactory.getLogger(FeedbackRecordResource.class);

    private static final String ENTITY_NAME = "feedbackRecord";

    @Value("${jhipster.clientApp.name}")
    private String applicationName;

    private final FeedbackRecordService feedbackRecordService;

    private final FeedbackRecordQueryService feedbackRecordQueryService;

    public FeedbackRecordResource(FeedbackRecordService feedbackRecordService, FeedbackRecordQueryService feedbackRecordQueryService) {
        this.feedbackRecordService = feedbackRecordService;
        this.feedbackRecordQueryService = feedbackRecordQueryService;
    }

    /**
     * {@code POST  /feedback-records} : Create a new feedbackRecord.
     *
     * @param feedbackRecordDTO the feedbackRecordDTO to create.
     * @return the {@link ResponseEntity} with status {@code 201 (Created)} and with body the new feedbackRecordDTO, or with status {@code 400 (Bad Request)} if the feedbackRecord has already an ID.
     * @throws URISyntaxException if the Location URI syntax is incorrect.
     */
    @PostMapping("/feedback-records")
    public ResponseEntity<FeedbackRecordDTO> createFeedbackRecord(@RequestBody FeedbackRecordDTO feedbackRecordDTO)
        throws URISyntaxException {
        log.debug("REST request to save FeedbackRecord : {}", feedbackRecordDTO);
        if (feedbackRecordDTO.getId() != null) {
            throw new BadRequestAlertException("A new feedbackRecord cannot already have an ID", ENTITY_NAME, "idexists");
        }
        FeedbackRecordDTO result = feedbackRecordService.save(feedbackRecordDTO);
        return ResponseEntity
            .created(new URI("/api/feedback-records/" + result.getId()))
            .headers(HeaderUtil.createEntityCreationAlert(applicationName, true, ENTITY_NAME, result.getId().toString()))
            .body(result);
    }

    /**
     * {@code PUT  /feedback-records} : Updates an existing feedbackRecord.
     *
     * @param feedbackRecordDTO the feedbackRecordDTO to update.
     * @return the {@link ResponseEntity} with status {@code 200 (OK)} and with body the updated feedbackRecordDTO,
     * or with status {@code 400 (Bad Request)} if the feedbackRecordDTO is not valid,
     * or with status {@code 500 (Internal Server Error)} if the feedbackRecordDTO couldn't be updated.
     * @throws URISyntaxException if the Location URI syntax is incorrect.
     */
    @PutMapping("/feedback-records")
    public ResponseEntity<FeedbackRecordDTO> updateFeedbackRecord(@RequestBody FeedbackRecordDTO feedbackRecordDTO)
        throws URISyntaxException {
        log.debug("REST request to update FeedbackRecord : {}", feedbackRecordDTO);
        if (feedbackRecordDTO.getId() == null) {
            throw new BadRequestAlertException("Invalid id", ENTITY_NAME, "idnull");
        }
        FeedbackRecordDTO result = feedbackRecordService.save(feedbackRecordDTO);
        return ResponseEntity
            .ok()
            .headers(HeaderUtil.createEntityUpdateAlert(applicationName, true, ENTITY_NAME, feedbackRecordDTO.getId().toString()))
            .body(result);
    }

    /**
     * {@code PATCH  /feedback-records} : Updates given fields of an existing feedbackRecord.
     *
     * @param feedbackRecordDTO the feedbackRecordDTO to update.
     * @return the {@link ResponseEntity} with status {@code 200 (OK)} and with body the updated feedbackRecordDTO,
     * or with status {@code 400 (Bad Request)} if the feedbackRecordDTO is not valid,
     * or with status {@code 404 (Not Found)} if the feedbackRecordDTO is not found,
     * or with status {@code 500 (Internal Server Error)} if the feedbackRecordDTO couldn't be updated.
     * @throws URISyntaxException if the Location URI syntax is incorrect.
     */
    @PatchMapping(value = "/feedback-records", consumes = "application/merge-patch+json")
    public ResponseEntity<FeedbackRecordDTO> partialUpdateFeedbackRecord(@RequestBody FeedbackRecordDTO feedbackRecordDTO)
        throws URISyntaxException {
        log.debug("REST request to update FeedbackRecord partially : {}", feedbackRecordDTO);
        if (feedbackRecordDTO.getId() == null) {
            throw new BadRequestAlertException("Invalid id", ENTITY_NAME, "idnull");
        }

        Optional<FeedbackRecordDTO> result = feedbackRecordService.partialUpdate(feedbackRecordDTO);

        return ResponseUtil.wrapOrNotFound(
            result,
            HeaderUtil.createEntityUpdateAlert(applicationName, true, ENTITY_NAME, feedbackRecordDTO.getId().toString())
        );
    }

    /**
     * {@code GET  /feedback-records} : get all the feedbackRecords.
     *
     * @param pageable the pagination information.
     * @param criteria the criteria which the requested entities should match.
     * @return the {@link ResponseEntity} with status {@code 200 (OK)} and the list of feedbackRecords in body.
     */
    @GetMapping("/feedback-records")
    public ResponseEntity<List<FeedbackRecordDTO>> getAllFeedbackRecords(FeedbackRecordCriteria criteria, Pageable pageable) {
        log.debug("REST request to get FeedbackRecords by criteria: {}", criteria);
        Page<FeedbackRecordDTO> page = feedbackRecordQueryService.findByCriteria(criteria, pageable);
        HttpHeaders headers = PaginationUtil.generatePaginationHttpHeaders(ServletUriComponentsBuilder.fromCurrentRequest(), page);
        return ResponseEntity.ok().headers(headers).body(page.getContent());
    }

    /**
     * {@code GET  /feedback-records/count} : count all the feedbackRecords.
     *
     * @param criteria the criteria which the requested entities should match.
     * @return the {@link ResponseEntity} with status {@code 200 (OK)} and the count in body.
     */
    @GetMapping("/feedback-records/count")
    public ResponseEntity<Long> countFeedbackRecords(FeedbackRecordCriteria criteria) {
        log.debug("REST request to count FeedbackRecords by criteria: {}", criteria);
        return ResponseEntity.ok().body(feedbackRecordQueryService.countByCriteria(criteria));
    }

    /**
     * {@code GET  /feedback-records/:id} : get the "id" feedbackRecord.
     *
     * @param id the id of the feedbackRecordDTO to retrieve.
     * @return the {@link ResponseEntity} with status {@code 200 (OK)} and with body the feedbackRecordDTO, or with status {@code 404 (Not Found)}.
     */
    @GetMapping("/feedback-records/{id}")
    public ResponseEntity<FeedbackRecordDTO> getFeedbackRecord(@PathVariable Long id) {
        log.debug("REST request to get FeedbackRecord : {}", id);
        Optional<FeedbackRecordDTO> feedbackRecordDTO = feedbackRecordService.findOne(id);
        return ResponseUtil.wrapOrNotFound(feedbackRecordDTO);
    }

    /**
     * {@code DELETE  /feedback-records/:id} : delete the "id" feedbackRecord.
     *
     * @param id the id of the feedbackRecordDTO to delete.
     * @return the {@link ResponseEntity} with status {@code 204 (NO_CONTENT)}.
     */
    @DeleteMapping("/feedback-records/{id}")
    public ResponseEntity<Void> deleteFeedbackRecord(@PathVariable Long id) {
        log.debug("REST request to delete FeedbackRecord : {}", id);
        feedbackRecordService.delete(id);
        return ResponseEntity
            .noContent()
            .headers(HeaderUtil.createEntityDeletionAlert(applicationName, true, ENTITY_NAME, id.toString()))
            .build();
    }
}

Testing the API

Fetching data

    @GetMapping("/feedback-records")
    public ResponseEntity<List<FeedbackRecordDTO>> getAllFeedbackRecords(FeedbackRecordCriteria criteria, Pageable pageable) {
        log.debug("REST request to get FeedbackRecords by criteria: {}", criteria);
        Page<FeedbackRecordDTO> page = feedbackRecordQueryService.findByCriteria(criteria, pageable);
        HttpHeaders headers = PaginationUtil.generatePaginationHttpHeaders(ServletUriComponentsBuilder.fromCurrentRequest(), page);
        return ResponseEntity.ok().headers(headers).body(page.getContent());
    }

The interesting part here is FeedbackRecordCriteria: it lets you filter on every field of the entity without writing any dedicated query code:

For example, feedbackStatus is an enum, so filters such as equals and in are available.

(figure: feedbackStatus filter)

About the available filters:

(figure: filter operators)

Let's try it. To query records whose feedbackStatus is TO_BE_REPLY, we can use feedbackStatus.equals=TO_BE_REPLY:

GET http://localhost:8080/api/feedback-records?sort=id,asc&page=0&size=20&feedbackStatus.equals=TO_BE_REPLY




[
    {
        "id": 1,
        "feedbackType": "COMPLAINTS",
        "title": "SMTP lavender Table",
        "feedbackStatus": "TO_BE_REPLY",
        "lastReplyTime": 9391,
        "closeType": "NORMALLY",
        "createdDate": "2021-03-11T21:38:31Z",
        "createdBy": "新疆 Central Soft"
    },
    {
        "id": 2,
        "feedbackType": "ADVICE",
        "title": "上海市 haptic",
        "feedbackStatus": "TO_BE_REPLY",
        "lastReplyTime": 53521,
        "closeType": "NORMALLY",
        "createdDate": "2021-03-11T18:04:14Z",
        "createdBy": "Rubber connect 桥"
    },
    {
        "id": 4,
        "feedbackType": "ADVICE",
        "title": "Senior index",
        "feedbackStatus": "TO_BE_REPLY",
        "lastReplyTime": 67874,
        "closeType": "TIMEOUT",
        "createdDate": "2021-03-11T14:53:15Z",
        "createdBy": "Uganda"
    },
    {
        "id": 6,
        "feedbackType": "ADVICE",
        "title": "Expanded Sports compelling",
        "feedbackStatus": "TO_BE_REPLY",
        "lastReplyTime": 8032,
        "closeType": "TIMEOUT",
        "createdDate": "2021-03-12T03:53:46Z",
        "createdBy": "deposit Chicken mesh"
    },
    {
        "id": 7,
        "feedbackType": "ADVICE",
        "title": "Division overriding",
        "feedbackStatus": "TO_BE_REPLY",
        "lastReplyTime": 38000,
        "closeType": "NORMALLY",
        "createdDate": "2021-03-11T07:57:51Z",
        "createdBy": "Account stable"
    },
    {
        "id": 9,
        "feedbackType": "ADVICE",
        "title": "Loan",
        "feedbackStatus": "TO_BE_REPLY",
        "lastReplyTime": 99908,
        "closeType": "TIMEOUT",
        "createdDate": "2021-03-11T09:47:44Z",
        "createdBy": "Re-engineered"
    }
]
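Under the hood, the generated FeedbackRecordQueryService translates the criteria object into a JPA Specification. Below is a minimal sketch of what that generated class typically looks like (simplified from JHipster's output, so details may differ between versions):

@Service
@Transactional(readOnly = true)
public class FeedbackRecordQueryService extends QueryService<FeedbackRecord> {

    private final FeedbackRecordRepository feedbackRecordRepository;
    private final FeedbackRecordMapper feedbackRecordMapper;

    public FeedbackRecordQueryService(FeedbackRecordRepository feedbackRecordRepository,
                                      FeedbackRecordMapper feedbackRecordMapper) {
        this.feedbackRecordRepository = feedbackRecordRepository;
        this.feedbackRecordMapper = feedbackRecordMapper;
    }

    @Transactional(readOnly = true)
    public Page<FeedbackRecordDTO> findByCriteria(FeedbackRecordCriteria criteria, Pageable page) {
        // Build a Specification from the criteria, then let the repository run it
        final Specification<FeedbackRecord> specification = createSpecification(criteria);
        return feedbackRecordRepository.findAll(specification, page).map(feedbackRecordMapper::toDto);
    }

    protected Specification<FeedbackRecord> createSpecification(FeedbackRecordCriteria criteria) {
        Specification<FeedbackRecord> specification = Specification.where(null);
        if (criteria != null) {
            if (criteria.getId() != null) {
                specification = specification.and(buildRangeSpecification(criteria.getId(), FeedbackRecord_.id));
            }
            if (criteria.getFeedbackStatus() != null) {
                // This clause is what makes feedbackStatus.equals=... and feedbackStatus.in=... work
                specification = specification.and(buildSpecification(criteria.getFeedbackStatus(), FeedbackRecord_.feedbackStatus));
            }
            // ... one if-block per filterable field
        }
        return specification;
    }
}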

A look at the JHipster admin screens

The front-end code JHipster generates includes several admin screens:

(figure: admin screens)

Resource monitoring:

(figure: resource monitoring)

Development practice

Adding a field to an existing entity

The project directory contains a .jhipster folder holding the definitions of the existing entities.

(figure: existing entities)

To add a field to an entity, open its JSON file and append the field to fields; for example, let's add a content field:

{
  "name": "FeedbackRecord",
  "fields": [
    {
      "fieldName": "feedbackType",
      "fieldType": "FeedbackType",
      "javadoc": "反馈类型",
      "fieldValues": "ADVICE,COMPLAINTS"
    },
    {
      "fieldName": "title",
      "fieldType": "String",
      "javadoc": "问题描述"
    },
    {
      "fieldName": "content",
      "fieldType": "String",
      "javadoc": "问题详情"
    },
    {
      "fieldName": "feedbackStatus",
      "fieldType": "FeedbackStatus",
      "javadoc": "反馈状态",
      "fieldValues": "TO_BE_SUBMIT,TO_BE_REPLY,TO_BE_CONFIRMED"
    },
    {
      "fieldName": "lastReplyTime",
      "fieldType": "Integer",
      "javadoc": "是否已完成"
    },
    {
      "fieldName": "closeType",
      "fieldType": "FeedbackCloseType",
      "javadoc": "关闭类型",
      "fieldValues": "NORMALLY,TIMEOUT"
    },
    {
      "fieldName": "createdDate",
      "fieldType": "Instant",
      "javadoc": "创建时间"
    },
    {
      "fieldName": "createdBy",
      "fieldType": "String",
      "javadoc": "创建者"
    }
  ],
  "relationships": [],
  "javadoc": "反馈记录表",
  "entityTableName": "feedback_record",
  "dto": "mapstruct",
  "pagination": "pagination",
  "service": "serviceImpl",
  "jpaMetamodelFiltering": true,
  "fluentMethods": true,
  "readOnly": false,
  "embedded": false,
  "applications": "*",
  "changelogDate": "20210312072243"
}

Run the entity generator again:

jhipster entity FeedbackRecord

When you run the entity sub-generator for an existing entity, it asks "Do you want to update the entity? This will replace the existing files for this entity, all your custom code will be overwritten" and offers the following options:

  • Yes, re generate the entity - regenerates the entity. Tip: you can force this by passing the --regenerate flag when running the sub-generator
  • Yes, add more fields and relationships - asks a few questions so you can add more fields and relationships
  • Yes, remove fields and relationships - asks a few questions so you can remove existing fields and relationships from the entity
  • No, exit - exits the sub-generator without changing anything

There are various reasons you might want to update an entity this way.

Tip: to regenerate all entities at once, use the following commands (omitting the --force flag prompts you before overwriting changed files).

  • Linux & Mac: for f in `ls .jhipster`; do jhipster entity ${f%.*} --force ; done
  • Windows: for %f in (.jhipster/*) do jhipster entity %~nf --force


Author: jqpeng
Original link: How the Spring Framework uses design patterns flexibly

Singleton

The singleton pattern ensures a single instance per application. By default, Spring creates every bean as a singleton.

(figure: singleton pattern)

A bean obtained with @Autowired is globally unique.

@RestController
public class LibraryController {
    
    @Autowired
    private BookRepository repository;

    @GetMapping("/count")
    public Long findCount() {
        System.out.println(repository);
        return repository.count();
    }
}

Factory method

Spring defines the BeanFactory interface, which abstracts the object container:

public interface BeanFactory {

    <T> T getBean(Class<T> requiredType);
    <T> T getBean(Class<T> requiredType, Object... args);
    Object getBean(String name);

    // ...
}

Each getBean method is effectively a factory method.
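A quick usage sketch (AppConfig here is a hypothetical @Configuration class): asking the container for a bean is a factory-method call:

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

// ApplicationContext extends BeanFactory, so both lookups below are factory methods
ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
BookRepository byType = ctx.getBean(BookRepository.class); // lookup by type
Object byName = ctx.getBean("bookRepository");             // lookup by bean name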

Proxy

(figure: proxy pattern)

In Spring, we can make a method transactional simply by adding a @Transactional annotation:

@Service
public class BookManager {
    
    @Autowired
    private BookRepository repository;

    @Transactional
    public Book create(String author) {
        System.out.println(repository.getClass().getName());
        return repository.create(author);
    }
}

The Spring Framework implements this through an AOP proxy: the injected bean is actually a proxy that wraps transaction handling around the target method.
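Here is a minimal sketch of the idea using a JDK dynamic proxy (purely illustrative; Spring's real transaction infrastructure is far more involved):

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface BookRepository {
    String create(String author);
}

public class TxProxyDemo {
    public static void main(String[] args) {
        BookRepository target = author -> "book created by " + author;
        // The proxy wraps "transaction" behaviour around every call to the target
        BookRepository proxy = (BookRepository) Proxy.newProxyInstance(
                BookRepository.class.getClassLoader(),
                new Class<?>[] { BookRepository.class },
                (Object p, Method m, Object[] a) -> {
                    System.out.println("begin transaction"); // before advice
                    try {
                        Object result = m.invoke(target, a); // the real call
                        System.out.println("commit");        // after advice
                        return result;
                    } catch (Exception e) {
                        System.out.println("rollback");
                        throw e;
                    }
                });
        System.out.println(proxy.create("jqpeng"));
    }
}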

Decorator

Spring's TransactionAwareCacheDecorator wraps a Cache:

public interface Cache {
    String getName();

    Object getNativeCache();

    @Nullable
    Cache.ValueWrapper get(Object var1);

    @Nullable
    <T> T get(Object var1, @Nullable Class<T> var2);

    @Nullable
    <T> T get(Object var1, Callable<T> var2);

    void put(Object var1, @Nullable Object var2);
}

TransactionAwareCacheDecorator implements the Cache interface, takes a targetCache at construction time, and adds its own decorating logic around methods such as put.

public class TransactionAwareCacheDecorator implements Cache {
    private final Cache targetCache;

    public TransactionAwareCacheDecorator(Cache targetCache) {
        Assert.notNull(targetCache, "Target Cache must not be null");
        this.targetCache = targetCache;
    }

    public void put(final Object key, @Nullable final Object value) {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            // Inside a transaction: defer the cache write until the transaction commits
            TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
                public void afterCommit() {
                    TransactionAwareCacheDecorator.this.targetCache.put(key, value);
                }
            });
        } else {
            this.targetCache.put(key, value);
        }
    }

    // ... the remaining Cache methods delegate to targetCache
}
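Using it is just a matter of wrapping one cache in another (a sketch; "books" is an arbitrary cache name):

Cache plain = new ConcurrentMapCache("books");
Cache txAware = new TransactionAwareCacheDecorator(plain);
txAware.put("k", "v"); // deferred until afterCommit when a transaction is active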

Best practices

  • The decorator pattern is a strong complement to inheritance. Compared with inheritance, it improves maintainability, extensibility and reuse, and in some cases it can replace inheritance and avoid class explosion.
  • Decorators also make a program easier to extend. In any project many factors, especially business changes, cannot be foreseen; by wrapping an intermediate class in a new decorator, you avoid modifying the inheritance hierarchy, so the original code stays untouched and the change is delivered through extension.

Composite

Spring Boot Actuator provides HealthIndicator for monitoring service health.

@FunctionalInterface
public interface HealthIndicator {

    /**
     * Return an indication of health.
     * @return the health
     */
    Health health();

}

Among its implementations is CompositeHealthIndicator: you add multiple HealthIndicators into its indicators map, and when health() is called it aggregates the Health of all of them:

public class CompositeHealthIndicator implements HealthIndicator {

    private final Map<String, HealthIndicator> indicators;

    private final HealthAggregator healthAggregator;

    /**
     * Create a new {@link CompositeHealthIndicator}.
     * @param healthAggregator the health aggregator
     */
    public CompositeHealthIndicator(HealthAggregator healthAggregator) {
        this(healthAggregator, new LinkedHashMap<>());
    }

    /**
     * Create a new {@link CompositeHealthIndicator} from the specified indicators.
     * @param healthAggregator the health aggregator
     * @param indicators a map of {@link HealthIndicator}s with the key being used as an
     * indicator name.
     */
    public CompositeHealthIndicator(HealthAggregator healthAggregator,
            Map<String, HealthIndicator> indicators) {
        Assert.notNull(healthAggregator, "HealthAggregator must not be null");
        Assert.notNull(indicators, "Indicators must not be null");
        this.indicators = new LinkedHashMap<>(indicators);
        this.healthAggregator = healthAggregator;
    }

    public void addHealthIndicator(String name, HealthIndicator indicator) {
        this.indicators.put(name, indicator);
    }

    @Override
    public Health health() {
        Map<String, Health> healths = new LinkedHashMap<>();
        for (Map.Entry<String, HealthIndicator> entry : this.indicators.entrySet()) {
            healths.put(entry.getKey(), entry.getValue().health());
        }
        return this.healthAggregator.aggregate(healths);
    }

}
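A usage sketch (OrderedHealthAggregator ships with Spring Boot 2.x; the lambdas work because HealthIndicator is a functional interface):

CompositeHealthIndicator composite =
        new CompositeHealthIndicator(new OrderedHealthAggregator());
composite.addHealthIndicator("db", () -> Health.up().build());
composite.addHealthIndicator("disk", () -> Health.down().withDetail("free", "120MB").build());
// The aggregator picks the "worst" status, so the overall result here is DOWN
System.out.println(composite.health());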


Author: jqpeng
Original link: 7 tips for better Nginx TLS/SSL HTTPS performance

Since July 2018, Google Chrome has marked plain "HTTP" sites as "not secure". The Internet has moved to HTTPS rapidly over the past few years: HTTPS now accounts for over 70% of Chrome traffic, and more than 80 of the top 100 websites default to HTTPS. Nginx, today's most common server, is widely used for load balancing (LB), gateways and reverse proxies. With that in mind, let's walk through Nginx tuning tips that improve Nginx + HTTPS performance for a better TTFB and lower latency.

7 tips for better Nginx SSL/HTTPS performance

(figure: HTTPS optimization)

1. Enable HTTP/2

HTTP/2 was first implemented in Nginx 1.9.5 to replace spdy. Enabling the HTTP/2 module in Nginx is simple.

The original configuration:

listen 443 ssl;

Change it to:

listen 443 ssl http2;

Verify with curl:

curl --http2 -I https://domain.com/

2. Enable the SSL session cache

Enabling the SSL session cache avoids repeated TLS verification and cuts down on full TLS handshakes. 1 MB of memory caches about 4000 sessions, which is a bargain now that memory is cheap, so turn it on.

ssl_session_cache shared:SSL:50m; # 1 MB holds roughly 4000 sessions
ssl_session_timeout 1h; # sessions can be re-used for 1 hour
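To confirm that sessions are actually re-used, an openssl one-liner helps (domain.com is a placeholder); -reconnect performs several handshakes with the same session, so "Reused" lines should appear:

echo | openssl s_client -connect domain.com:443 -reconnect 2>/dev/null | grep Reused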

3. Disable SSL session tickets

Nginx does not rotate session-ticket keys automatically, and long-lived ticket keys weaken forward secrecy, so unless you manage key rotation yourself it is safer to turn tickets off.

ssl_session_tickets off;

4. Disable TLS version 1.0

TLS 1.3 is already here; TLS 1.0 can be tossed into the dustbin of history.

ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

Change it to:

ssl_protocols TLSv1.2 TLSv1.3;

5. Enable OCSP stapling

Without OCSP stapling, a client connecting to your server may have to contact the CA's OCSP responder to check the certificate's revocation status, and that round trip takes an unpredictable amount of time. With OCSP stapling enabled, the server staples its cached OCSP response into the handshake and spares the client that step.

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/full_chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

6. Reduce ssl_buffer_size

ssl_buffer_size controls the buffer used when sending data. The default is 16k; to minimize TTFB (time to first byte), a smaller value is better and can save roughly 30 to 50 ms of TTFB.

ssl_buffer_size 4k;

7. Adjust cipher priorities

Put newer, faster ciphers first to get lower latency.

# manually set the cipher list
ssl_prefer_server_ciphers on;  # prefer our list of ciphers to keep out old and slow ones
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
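Putting the seven tips together, a server block would look roughly like this (a sketch: the domain and certificate paths are placeholders):

server {
    listen 443 ssl http2;                        # tip 1: HTTP/2
    server_name domain.com;

    ssl_certificate     /path/to/full_chain.pem;
    ssl_certificate_key /path/to/private.key;

    ssl_session_cache shared:SSL:50m;            # tip 2: session cache
    ssl_session_timeout 1h;
    ssl_session_tickets off;                     # tip 3: no tickets without key rotation
    ssl_protocols TLSv1.2 TLSv1.3;               # tip 4: drop TLS 1.0/1.1

    ssl_stapling on;                             # tip 5: OCSP stapling
    ssl_stapling_verify on;
    ssl_trusted_certificate /path/to/full_chain.pem;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    ssl_buffer_size 4k;                          # tip 6: smaller TLS records for TTFB
    ssl_prefer_server_ciphers on;                # tip 7: our cipher order first
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
}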

Author: jqpeng
Original link: gitlab flow best practices for efficient teams

Git is now the version-control tool of choice for most development teams; a good workflow lets everyone cooperate effectively, like a well-run assembly line.

The industry has three main flows:

  • Git flow
  • Github flow
  • Gitlab flow

Below we analyze them, then design a git convention that suits our team, based on gitlab flow.

From git flow to gitlab flow

git flow

First, git flow. It looks roughly like this:

(figure: git flow)

Our old git convention was modeled on git flow.

(figure: our old git workflow)

It tried to cover development, testing, feature work, ad-hoc requests and hotfixes all at once. Great on paper, but far too complex to run in practice. So how do we streamline the process?

Let's look at what the industry does, starting with github flow.

github flow

Github flow is a simplified Git flow, built for "continuous delivery". It is the workflow used at Github.com.

(figure: github flow)

The whole flow:

(figure: github flow steps)

  • Step 1: for each requirement, cut a new branch from master; no distinction between feature and patch branches.
  • Step 2: when the new branch is done, or needs discussion, open a pull request (PR) against master.
  • Step 3: a pull request is both a notification, drawing attention to your request, and a conversation in which everyone reviews and discusses your code; you can keep pushing commits during the discussion.
  • Step 4: once your pull request is accepted and merged into master and redeployed, the branch you cut is deleted. (Deploying before merging also works.)

To keep quality high, github flow demands a lot of its contributors; put another way, if contributor quality is uneven, quality cannot be guaranteed.

For libraries, frameworks and tools, products that are not end-user applications, github flow works fine; but if the product is a final application, github flow may not be a good fit.

gitlab flow

Gitlab flow is a synthesis of Git flow and Github flow. It takes the best of both: the flexibility to adapt to different development environments, plus the simplicity and convenience of a single main branch. It is the approach recommended by Gitlab.com.

Gitlab flow's overriding principle is "upstream first": there is a single main branch, master, which is the "upstream" of every other branch. Only changes accepted upstream may be applied to other branches.

For "continuous release" projects, it recommends environment branches beyond master: for example master for the development environment, pre-production for staging, and production for the production environment.

(figure: gitlab flow environment branches)

Only in an emergency may you skip the upstream and merge directly into a downstream branch.

For "versioned release" projects, the advice is to cut a branch from master for every stable version, such as 2-3-stable or 2-4-stable.

(figure: release branches)

How does gitlab flow handle hotfixes? A good half of git flow's complexity comes from over-engineering hotfixes. A hotfix is an urgent fix for a problem discovered after the code reached production; in gitlab flow you fix it on the relevant downstream branch and release from there.

Our team's git convention

Putting the above together, we decided to adopt gitlab flow in its versioned-release form. Concretely:

  1. At the start of an iteration, every developer cuts a personal feature branch from master; naming convention: feature-name
  2. When a feature is complete, merge it into master before the iteration ends
  3. A merge into master triggers CI/CD to the dev environment automatically
  4. After dev self-testing passes, cut the release branch release-$version from master and deploy it to the test environment
  5. Bugs found in testing are fixed on branches cut from release-$version, then merged back into release-$version
  6. Release the version; bugs found after going live are handled as in step 5
  7. Once the release is stable, merge release-$version back into master

Best practices

Developing a feature

Create a new branch, e.g. feat-test

(figure: new branch)

Write the new feature and commit:

@GetMapping(path = "/test", produces = "application/json")
@ResponseBody
public Map<String, Object> test() {
    return singletonMap("test", "test");
}



git commit -m "feat: add test code"
git push origin feat-test

Opening an MR

After pushing the code, open an MR targeting master to request the merge.

(figure: merge request)

Note

  • Automated code review can be added at this step.

Merging the code

The team lead opens the MR, reviews the code, and can leave suggestions:

(figure: review comments)

The developer fixes the code per the suggestions, or commits changes made offline.

(figure: applying a suggestion)

Once the lead confirms everything is fine, the MR can be merged into master.

(figure: merge)

After merging, the feat branch can be deleted.

With the new feature done, it can be handed over for testing.

Releasing a version

Semantic version numbers

Version format: MAJOR.MINOR.PATCH, incremented as follows:

MAJOR: incompatible API changes,
MINOR: backwards-compatible feature additions,
PATCH: backwards-compatible bug fixes.
Pre-release and build-metadata labels may be appended after "MAJOR.MINOR.PATCH".

A MAJOR version of 0 means no official release yet.

Test releases

The master branch deploys automatically to the development environment (dev).

Once a feature is complete and self-tested, the code is merged toward the version to be released.

Branch rule:

release-version

Version rule:

MAJOR.MINOR

The build appends the patch number automatically:

MAJOR.MINOR.PATCH

Cut a new branch release-$version from the latest master, e.g. release-0.1:

git checkout -b release-0.1

release-$version builds automatically, with version number $version.$buildNumber.

Make release-$version a protected branch: no direct pushes, code lands only through merges, and it accepts MRs.

Bug fixes

When a bug needs fixing, cut a new branch from release-$version and merge it back into release-$version when the fix is done.
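A sketch of the commands (branch names are illustrative):

git checkout release-0.1
git checkout -b fix-login-error   # cut a fix branch from the release branch
# ...fix, verify locally, commit...
git push origin fix-login-error   # then open an MR targeting release-0.1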

  • Q: How is a branch cut from release-$version tested?
  • A: This node is defined as a bug-fix node; the developer should verify locally first and merge into the release branch only after the fix passes verification.
  • Q: What if release-$version branches pile up?
  • A: Keep the latest 10; tag the older ones and delete their branches.


Author: jqpeng
Original link: Are you using Spring's @Valid and @Validated annotations correctly?

1. Overview

In this article we focus on the difference between the @Valid and @Validated annotations in Spring.

Validating user input is a common requirement in applications. Spring provides two annotations for it, @Valid and @Validated; let's look at them in detail.

2. The @Valid and @Validated annotations

In Spring, @Valid is used for method-level validation, and it can also mark member fields for validation.

However, it does not support group validation; @Validated does.

3. Example

Consider a simple user-registration form built with Spring Boot. At first it has only name and password fields:

public class UserAccount {

    @NotNull
    @Size(min = 4, max = 15)
    private String password;

    @NotBlank
    private String name;

    // standard constructors / setters / getters / toString

}

Next, let's look at the controller. Here the saveBasicInfo method uses the @Valid annotation to validate the user input:

@RequestMapping(value = "/saveBasicInfo", method = RequestMethod.POST)
public String saveBasicInfo(
  @Valid @ModelAttribute("useraccount") UserAccount useraccount, 
  BindingResult result, 
  ModelMap model) {
    if (result.hasErrors()) {
        return "error";
    }
    return "success";
}

Now let's test the method:

@Test
public void givenSaveBasicInfo_whenCorrectInput_thenSuccess() throws Exception {
    this.mockMvc.perform(MockMvcRequestBuilders.post("/saveBasicInfo")
      .accept(MediaType.TEXT_HTML)
      .param("name", "test123")
      .param("password", "pass"))
      .andExpect(view().name("success"))
      .andExpect(status().isOk())
      .andDo(print());
}

After confirming the test passes, let's extend the feature. The logical next step is to turn it into a multi-step registration form, like most wizards. Step one keeps name and password. In step two we collect additional information such as age and phone, so we update the domain object with these extra fields:

public class UserAccount {

    @NotNull
    @Size(min = 4, max = 15)
    private String password;

    @NotBlank
    private String name;

    @Min(value = 18, message = "Age should not be less than 18")
    private int age;

    @NotBlank
    private String phone;

    // standard constructors / setters / getters / toString   

}

This time, though, the earlier test fails. That is because it does not pass the age and phone fields.

To support this behavior, we introduce the @Validated annotation, which supports group validation.

Group validation splits fields into groups that are validated separately; here we split the user information into two groups, BasicInfo and AdvanceInfo.

Create two empty marker interfaces:

public interface BasicInfo {
}



public interface AdvanceInfo {
}

Step one will use the BasicInfo interface and step two AdvanceInfo. We then update the UserAccount class to use these marker interfaces, as follows:

public class UserAccount {

    @NotNull(groups = BasicInfo.class)
    @Size(min = 4, max = 15, groups = BasicInfo.class)
    private String password;

    @NotBlank(groups = BasicInfo.class)
    private String name;

    @Min(value = 18, message = "Age should not be less than 18", groups = AdvanceInfo.class)
    private int age;

    @NotBlank(groups = AdvanceInfo.class)
    private String phone;

    // standard constructors / setters / getters / toString   

}

We also update the controller to use the @Validated annotation instead of @Valid:

@RequestMapping(value = "/saveBasicInfoStep1", method = RequestMethod.POST)
public String saveBasicInfoStep1(
  @Validated(BasicInfo.class) 
  @ModelAttribute("useraccount") UserAccount useraccount, 
  BindingResult result, ModelMap model) {
    if (result.hasErrors()) {
        return "error";
    }
    return "success";
}

After the update, the earlier test runs successfully again. Now let's also test the new method:

@Test
public void givenSaveBasicInfoStep1_whenCorrectInput_thenSuccess() throws Exception {
    this.mockMvc.perform(MockMvcRequestBuilders.post("/saveBasicInfoStep1")
      .accept(MediaType.TEXT_HTML)
      .param("name", "test123")
      .param("password", "pass"))
      .andExpect(view().name("success"))
      .andExpect(status().isOk())
      .andDo(print());
}

It passes as well!
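For completeness, the second step's handler might look like this (a sketch; saveAdvanceInfoStep2 is a hypothetical name, validating the AdvanceInfo group):

@RequestMapping(value = "/saveAdvanceInfoStep2", method = RequestMethod.POST)
public String saveAdvanceInfoStep2(
  @Validated(AdvanceInfo.class)
  @ModelAttribute("useraccount") UserAccount useraccount,
  BindingResult result, ModelMap model) {
    if (result.hasErrors()) {
        return "error";
    }
    return "success";
}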

Next, let's see how @Valid is essential for triggering validation of nested properties.

4. Marking nested objects with @Valid

@Valid can be used on nested objects. For our current scenario, let's create a UserAddress object:

public class UserAddress {

    @NotBlank
    private String countryCode;

    // standard constructors / setters / getters / toString
}

To make sure this nested object gets validated, we decorate the field with the @Valid annotation:

public class UserAccount {

    //...

    @Valid
    @NotNull(groups = AdvanceInfo.class)
    private UserAddress useraddress;

    // standard constructors / setters / getters / toString 
}

5. Summary

@Valid validates the whole object, which becomes a problem when only part of it needs validating; in that case, use @Validated with validation groups.


Author: jqpeng
Original link: How hugegraph stores and retrieves data

hugegraph is a graph database open-sourced by Baidu that supports hbase, mysql, rocksdb and others as storage backends. This article uses EDGE storage with hbase as the backend to explore how hugegraph stores and retrieves data.

Storing data

Serialization

Edge

Serialization comes first; hbase uses BinarySerializer:

  • keyWithIdPrefix and indexWithIdPrefix are both false

This will matter later.

public class HbaseSerializer extends BinarySerializer {

    public HbaseSerializer() {
        super(false, true);
    }
}

To store into the db, the data must first be serialized into a BackendEntry, the transfer object between the graph database and the backend store; for Hbase this is BinaryBackendEntry:

public class BinaryBackendEntry implements BackendEntry {

    private static final byte[] EMPTY_BYTES = new byte[]{};

    private final HugeType type;
    private final BinaryId id;
    private Id subId;
    private final List<BackendColumn> columns;
    private long ttl;

    public BinaryBackendEntry(HugeType type, byte[] bytes) {
        this(type, BytesBuffer.wrap(bytes).parseId(type));
    }

    public BinaryBackendEntry(HugeType type, BinaryId id) {
        this.type = type;
        this.id = id;
        this.subId = null;
        this.columns = new ArrayList<>();
        this.ttl = 0L;
    }

Now for the serialization itself: it boils down to putting the data into the entry's columns.

  • For hbase, keyWithIdPrefix is false, so name does not include the ownerVertexId (see the EdgeId below, minus the ownerVertexId)
 public BackendEntry writeEdge(HugeEdge edge) {
        BinaryBackendEntry entry = newBackendEntry(edge);
        byte[] name = this.keyWithIdPrefix ?
                      this.formatEdgeName(edge) : EMPTY_BYTES;
        byte[] value = this.formatEdgeValue(edge);
        entry.column(name, value);

        if (edge.hasTtl()) {
            entry.ttl(edge.ttl());
        }

        return entry;
    }

EdgeId:

    private final Id ownerVertexId;
    private final Directions direction;
    private final Id edgeLabelId;
    private final String sortValues;
    private final Id otherVertexId;

    private final boolean directed;
    private String cache;

Backend storage

Once the BackendEntry is generated, the store mechanism hands it to the backend for storage.

Saving an EDGE corresponds to HbaseTables.Edge:

public static class Edge extends HbaseTable {

        @Override
        public void insert(Session session, BackendEntry entry) {
            long ttl = entry.ttl();
            if (ttl == 0L) {
                session.put(this.table(), CF, entry.id().asBytes(),
                            entry.columns());
            } else {
                session.put(this.table(), CF, entry.id().asBytes(),
                            entry.columns(), ttl);
            }
        }
}

CF is fixed to f:

    protected static final byte[] CF = "f".getBytes();

session.put corresponds to:

 @Override
        public void put(String table, byte[] family, byte[] rowkey,
                        Collection<BackendColumn> columns) {
            Put put = new Put(rowkey);
            for (BackendColumn column : columns) {
                put.addColumn(family, column.name, column.value);
            }
            this.batch(table, put);
        }

As you can see, when storing, the edge id is used as the rowkey, and the edge id with the ownerVertexId stripped off is used as column.name.
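Put together, the HBase layout for a single edge looks roughly like this (a schematic; the field order follows the "owner-vertex + dir + edge-label + sort-values + other-vertex" comment in parseEdge below):

rowkey       = owner-vertex-id | direction | edge-label-id | sort-values | other-vertex-id
column.name  =                   direction | edge-label-id | sort-values | other-vertex-id  (rowkey minus the owner prefix)
column.value = serialized edge properties (plus the expired time when a TTL is set)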

Reading an EDGE

Reading a BackendEntry from the backend

A read fetches the result from hbase, converts it into a BinaryBackendEntry, and then into an Edge.

Reading is a scan:

 /**
         * Inner scan: send scan request to HBase and get iterator
         */
        @Override
        public RowIterator scan(String table, Scan scan) {
            assert !this.hasChanges();

            try (Table htable = table(table)) {
                return new RowIterator(htable.getScanner(scan));
            } catch (IOException e) {
                throw new BackendException(e);
            }
        }

The scan returns a BackendEntryIterator:

protected BackendEntryIterator newEntryIterator(Query query,
                                                    RowIterator rows) {
        return new BinaryEntryIterator<>(rows, query, (entry, row) -> {
            E.checkState(!row.isEmpty(), "Can't parse empty HBase result");
            byte[] id = row.getRow();
            if (entry == null || !Bytes.prefixWith(id, entry.id().asBytes())) {
                HugeType type = query.resultType();
                // NOTE: only support BinaryBackendEntry currently
                entry = new BinaryBackendEntry(type, id);
            }
            try {
                this.parseRowColumns(row, entry, query);
            } catch (IOException e) {
                throw new BackendException("Failed to read HBase columns", e);
            }
            return entry;
        });
    }

Note that in new BinaryBackendEntry(type, id), the BinaryBackendEntry's id is not the raw rowkey; the rowkey is processed first:

public BinaryId parseId(HugeType type) {
        if (type.isIndex()) {
            return this.readIndexId(type);
        }
        // Parse id from bytes
        int start = this.buffer.position();
        /*
         * Since edge id in edges table doesn't prefix with leading 0x7e,
         * so readId() will return the source vertex id instead of edge id,
         * can't call: type.isEdge() ? this.readEdgeId() : this.readId();
         */
        Id id = this.readId();
        int end = this.buffer.position();
        int len = end - start;
        byte[] bytes = new byte[len];
        System.arraycopy(this.array(), start, bytes, 0, len);
        return new BinaryId(bytes, id);
    }

Here the ownerVertexId is read first as the Id part, and the remaining bytes go straight into bytes, together forming a BinaryId. This differs from serialization time. Why design it this way? Because vertices and edges alike are read back as a Vertex.

protected final BinaryBackendEntry newBackendEntry(HugeEdge edge) {
        BinaryId id = new BinaryId(formatEdgeName(edge),
                                   edge.idWithDirection());
        return newBackendEntry(edge.type(), id);
    }

public EdgeId directed(boolean directed) {
    return new EdgeId(this.ownerVertexId, this.direction, this.edgeLabelId,
                      this.sortValues, this.otherVertexId, directed);
}

At serialization time, the id is a full EdgeId.

The BackendEntryIterator supports merging results: the !Bytes.prefixWith(id, entry.id().asBytes()) check in the code above tests whether two rows share the same owner vertex; if they do, they are put into the same BackendEntry's columns.

     public BinaryEntryIterator(BackendIterator<Elem> results, Query query,
                               BiFunction<BackendEntry, Elem, BackendEntry> m)

    @Override
    protected final boolean fetch() {
        assert this.current == null;
        if (this.next != null) {
            this.current = this.next;
            this.next = null;
        }

        while (this.results.hasNext()) {
            Elem elem = this.results.next();
            BackendEntry merged = this.merger.apply(this.current, elem);
            E.checkState(merged != null, "Error when merging entry");
            if (this.current == null) {
                // The first time to read
                this.current = merged;
            } else if (merged == this.current) {
                // The next entry belongs to the current entry
                assert this.current != null;
                if (this.sizeOf(this.current) >= INLINE_BATCH_SIZE) {
                    break;
                }
            } else {
                // New entry
                assert this.next == null;
                this.next = merged;
                break;
            }

            // When limit exceed, stop fetching
            if (this.reachLimit(this.fetched() - 1)) {
                // Need remove last one because fetched limit + 1 records
                this.removeLastRecord();
                this.results.close();
                break;
            }
        }

        return this.current != null;
    }

Converting a BackendEntry into an edge

Now for reading the data back with readVertex; as noted above, even an edge is read as a vertex:

 @Override
    public HugeVertex readVertex(HugeGraph graph, BackendEntry bytesEntry) {
        if (bytesEntry == null) {
            return null;
        }
        BinaryBackendEntry entry = this.convertEntry(bytesEntry);

        // Parse id
        Id id = entry.id().origin();
        Id vid = id.edge() ? ((EdgeId) id).ownerVertexId() : id;
        HugeVertex vertex = new HugeVertex(graph, vid, VertexLabel.NONE);

        // Parse all properties and edges of a Vertex
        for (BackendColumn col : entry.columns()) {
            if (entry.type().isEdge()) {
                // NOTE: the entry id type is vertex even if entry type is edge
                // Parse vertex edges
                this.parseColumn(col, vertex);
            } else {
                assert entry.type().isVertex();
                // Parse vertex properties
                assert entry.columnsSize() == 1 : entry.columnsSize();
                this.parseVertex(col.value, vertex);
            }
        }

        return vertex;
    }

The logic:

  • First read the ownerVertexId and create a HugeVertex; at this point only the id is known, not the vertex label, so it is set to VertexLabel.NONE
  • Then read the BackendColumns, one column per edge (name is the edge id minus the ownerVertexId prefix, value is the edge data)

The parsing happens in parseColumn:

protected void parseColumn(BackendColumn col, HugeVertex vertex) {
        BytesBuffer buffer = BytesBuffer.wrap(col.name);
        Id id = this.keyWithIdPrefix ? buffer.readId() : vertex.id();
        E.checkState(buffer.remaining() > 0, "Missing column type");
        byte type = buffer.read();
        // Parse property
        if (type == HugeType.PROPERTY.code()) {
            Id pkeyId = buffer.readId();
            this.parseProperty(pkeyId, BytesBuffer.wrap(col.value), vertex);
        }
        // Parse edge
        else if (type == HugeType.EDGE_IN.code() ||
                 type == HugeType.EDGE_OUT.code()) {
            this.parseEdge(col, vertex, vertex.graph());
        }
        // Parse system property
        else if (type == HugeType.SYS_PROPERTY.code()) {
            // pass
        }
        // Invalid entry
        else {
            E.checkState(false, "Invalid entry(%s) with unknown type(%s): 0x%s",
                         id, type & 0xff, Bytes.toHex(col.name));
        }
    }

The type is read from col.name; if it is an edge, parseEdge is called:

protected void parseEdge(BackendColumn col, HugeVertex vertex,
                             HugeGraph graph) {
        // owner-vertex + dir + edge-label + sort-values + other-vertex

        BytesBuffer buffer = BytesBuffer.wrap(col.name);
        if (this.keyWithIdPrefix) {
            // Consume owner-vertex id
            buffer.readId();
        }
        byte type = buffer.read();
        Id labelId = buffer.readId();
        String sortValues = buffer.readStringWithEnding();
        Id otherVertexId = buffer.readId();

        boolean direction = EdgeId.isOutDirectionFromCode(type);
        EdgeLabel edgeLabel = graph.edgeLabelOrNone(labelId);

        // Construct edge
        HugeEdge edge = HugeEdge.constructEdge(vertex, direction, edgeLabel,
                                               sortValues, otherVertexId);

        // Parse edge-id + edge-properties
        buffer = BytesBuffer.wrap(col.value);

        //Id id = buffer.readId();

        // Parse edge properties
        this.parseProperties(buffer, edge);

        // Parse edge expired time if needed
        if (edge.hasTtl()) {
            this.parseExpiredTime(buffer, edge);
        }
    }

From col.name, read out type, labelId, sortValues and otherVertexId in order:

        byte type = buffer.read();
        Id labelId = buffer.readId();
        String sortValues = buffer.readStringWithEnding();
        Id otherVertexId = buffer.readId();

Then the label is looked up with EdgeLabel edgeLabel = graph.edgeLabelOrNone(labelId);

the edge is created, and its properties are parsed with parseProperties.

Finally the TTL is read; expired data is filtered out when results are processed.