Compare commits
777 Commits
| Author | SHA1 | Date |
|---|---|---|

*The per-commit author, date, and message cells of this compare view were not captured; only the abbreviated commit SHAs survive, running from `66e3da5d2f` through `cef1ec95d2`.*
**.github/ISSUE_TEMPLATE/bug_report.md** (new file, 51 lines, vendored)

@@ -0,0 +1,51 @@

---
name: Report a bug
about: Report a bug in KnowStreaming
title: ''
labels: bug
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicate.

Would you like to claim this bug?

「 Y / N 」

### Environment

* KnowStreaming version : <font size=4 color=red> xxx </font>
* Operating System version : <font size=4 color=red> xxx </font>
* Java version : <font size=4 color=red> xxx </font>

### Steps to reproduce

1. xxx

2. xxx

3. xxx

### Expected result

<!-- What did you expect to happen? -->

### Actual result

<!-- What actually happened? -->

---

If there is an exception, please attach the stack trace:

```
Just put your stack trace here!
```
**.github/ISSUE_TEMPLATE/config.yml** (new file, 8 lines, vendored)

@@ -0,0 +1,8 @@

blank_issues_enabled: true
contact_links:
  - name: Discuss a question
    url: https://github.com/didi/KnowStreaming/discussions/new
    about: Ask questions, start discussions, and so on
  - name: KnowStreaming official site
    url: https://knowstreaming.com/
    about: KnowStreaming website
**.github/ISSUE_TEMPLATE/detail_optimizing.md** (new file, 26 lines, vendored)

@@ -0,0 +1,26 @@

---
name: Optimization suggestion
about: Suggest an optimization for an existing feature
title: ''
labels: Optimization Suggestions
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicate.

Would you like to claim this optimization?

「 Y / N 」

### Environment

* KnowStreaming version : <font size=4 color=red> xxx </font>
* Operating System version : <font size=4 color=red> xxx </font>
* Java version : <font size=4 color=red> xxx </font>

### Feature points that need optimizing

### How you suggest optimizing them
**.github/ISSUE_TEMPLATE/feature_request.md** (new file, 20 lines, vendored)

@@ -0,0 +1,20 @@

---
name: Propose a new feature
about: Request a feature for KnowStreaming
title: ''
labels: feature
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no related feature request.
- [ ] I have searched the versions already published in the [release notes](https://github.com/didi/KnowStreaming/releases) and found no such feature.

Would you like to claim this feature?

「 Y / N 」

## Describe the feature here

<!-- Please describe your requirement as clearly as possible -->
**.github/ISSUE_TEMPLATE/question.md** (new file, 12 lines, vendored)

@@ -0,0 +1,12 @@

---
name: Ask a question
about: Ask a question about KnowStreaming
title: ''
labels: question
assignees: ''

---

- [ ] I have searched the [issues](https://github.com/didi/KnowStreaming/issues) and found no duplicate.

## Ask your question here
**.github/PULL_REQUEST_TEMPLATE.md** (new file, 23 lines, vendored)

@@ -0,0 +1,23 @@

Please do not create a Pull Request without first creating an Issue.

## What is the purpose of this change

XXXXX

## Brief changelog

XX

## How this change was verified

XXXX

Please follow this checklist to help us integrate your contribution quickly and easily:

* [ ] One PR (short for Pull Request) solves exactly one problem; a single PR must not address multiple problems;
* [ ] Make sure the PR has a corresponding Issue (usually created before you start work), unless it is a trivial change such as a typo, which needs no Issue;
* [ ] Format the title and body of the PR and of the commit log, as in #861. Note: the commit log must be written when you run `git commit`; it cannot be edited on GitHub;
* [ ] Write a PR description detailed enough to explain what the PR does, how, and why;
* [ ] Write the unit tests needed to verify your changes. If you submit a new feature or a major change, remember to add an integration test in the test module;
* [ ] Make sure the code compiles and the integration tests pass;
**.gitignore** (modified, 229 changed lines, vendored)

@@ -1,113 +1,116 @@

```
 ### Intellij ###
 # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm

 *.iml

 ## Directory-based project format:
 .idea/
 # if you remove the above rule, at least ignore the following:

 # User-specific stuff:
 # .idea/workspace.xml
 # .idea/tasks.xml
 # .idea/dictionaries
 # .idea/shelf

 # Sensitive or high-churn files:
 .idea/dataSources.ids
 .idea/dataSources.xml
 .idea/sqlDataSources.xml
 .idea/dynamic.xml
 .idea/uiDesigner.xml

 # Mongo Explorer plugin:
 .idea/mongoSettings.xml

 ## File-based project format:
 *.ipr
 *.iws

 ## Plugin-specific files:

 # IntelliJ
 /out/

 # mpeltonen/sbt-idea plugin
 .idea_modules/

 # JIRA plugin
 atlassian-ide-plugin.xml

 # Crashlytics plugin (for Android Studio and IntelliJ)
 com_crashlytics_export_strings.xml
 crashlytics.properties
 crashlytics-build.properties
 fabric.properties

 ### Java ###
 *.class

 # Mobile Tools for Java (J2ME)
 .mtj.tmp/

 # Package Files #
 *.jar
 *.war
 *.ear
 *.tar.gz

 # virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
 hs_err_pid*

 ### OSX ###
 .DS_Store
 .AppleDouble
 .LSOverride

 # Icon must end with two \r
 Icon

 # Thumbnails
 ._*

 # Files that might appear in the root of a volume
 .DocumentRevisions-V100
 .fseventsd
 .Spotlight-V100
 .TemporaryItems
 .Trashes
 .VolumeIcon.icns

 # Directories potentially created on remote AFP share
 .AppleDB
 .AppleDesktop
 Network Trash Folder
 Temporary Items
 .apdisk

 /target
 target/
 *.log
 *.log.*
 *.bak
 *.vscode
 */.vscode/*
 */.vscode
 */velocity.log*
 */*.log
 */*.log.*
 node_modules/
 node_modules/*
 workspace.xml
 /output/*
 .gitversion
-node_modules/*
 out/*
 dist/
 dist/*
-kafka-manager-web/src/main/resources/templates/
-.DS_Store
+km-rest/src/main/resources/templates/
+*dependency-reduced-pom*
+#filter flattened xml
+*/.flattened-pom.xml
+.flattened-pom.xml
+*/*/.flattened-pom.xml
```
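The new `.flattened-pom.xml` and `*dependency-reduced-pom*` entries cover files emitted by Maven's flatten and shade plugins. As a quick local sanity check (using `git check-ignore`, a generic Git facility; the scratch repository and the `km-rest` path below are only illustrative), the patterns behave like this:

```shell
# Build a throwaway repo containing just the newly added ignore patterns
# and confirm a flattened POM inside a module directory is ignored.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
printf '%s\n' '.flattened-pom.xml' '*/.flattened-pom.xml' \
    '*/*/.flattened-pom.xml' '*dependency-reduced-pom*' > .gitignore
mkdir -p km-rest
touch km-rest/.flattened-pom.xml
# check-ignore exits 0 when the path matches an ignore pattern
git check-ignore -q km-rest/.flattened-pom.xml && echo "ignored"
```

Because `*dependency-reduced-pom*` contains no slash, it applies at every directory level, while the slash-containing patterns are anchored relative to the `.gitignore` location.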
@@ -1,28 +0,0 @@

# Contribution Guideline

Thanks for considering contributing to this project. All issues and pull requests are highly appreciated.

## Pull Requests

Before sending a pull request to this project, please read and follow the guidelines below.

1. Branch: We only accept pull requests on the `dev` branch.
2. Coding style: Follow the coding style used in kafka-manager.
3. Commit message: Use English and check your spelling.
4. Test: Make sure to test your code.

Add device mode, API version, related logs, screenshots, and other related information in your pull request if possible.

NOTE: We assume all your contributions can be licensed under the [Apache License 2.0](LICENSE).

## Issues

We love clearly described issues. :)

The following information can help us resolve an issue faster:

* Device mode and hardware information.
* API version.
* Logs.
* Screenshots.
* Steps to reproduce the issue.
BIN  KS-PRD-3.0-beta1.docx (new file)
BIN  KS-PRD-3.0-beta2.docx (new file)
BIN  KS-PRD-3.1-ZK.docx (new file)
BIN  KS-PRD-3.2-Connect.docx (new file)
BIN  KS-PRD-3.3-MM2.docx (new file)
**README.md** (deleted, 102 lines)

@@ -1,102 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

This README introduces the target users and product positioning of DiDi Logi-KafkaManager; the demo address below lets you quickly try the full workflow of Kafka cluster metrics monitoring and operations management.

## 1 Product overview

DiDi Logi-KafkaManager grew out of DiDi's years of in-house Kafka operations practice. It is a shared, multi-tenant Kafka cloud platform built for Kafka users and Kafka operators, focused on core scenarios such as operations management, monitoring and alerting, and resource governance, and proven on large clusters with massive data volumes. Internal satisfaction reached 90%, and the product has entered commercial partnerships with several well-known companies.

### 1.1 Demo

- Demo address: http://117.51.150.133:8080, account/password admin/admin

### 1.2 Experience map

Whereas similar products offer a single user perspective (mostly the administrator's), DiDi Logi-KafkaManager provides role-based, multi-scenario experience maps: a **user experience map, an operations experience map, and a governance experience map**.

#### 1.2.1 User experience map

- Platform tenant application: apply for an application (App) as your Kafka user name, and authenticate with AppID+password
- Cluster resource application: apply and use on demand; use the platform's shared clusters, or request a dedicated cluster for an application
- Topic application: create Topics under an application (App), or request read/write permission on other topics
- Topic operations: sample Topic data, adjust quotas, request partitions, and so on
- Metrics monitoring: per-stage latency statistics for Topic production and consumption, with performance metrics monitored at different percentiles
- Consumer group operations: reset consumer offsets to a given time or position

#### 1.2.2 Operations experience map

- Multi-version cluster management: supports versions from `0.10.2` to `2.x`
- Cluster monitoring: historical and real-time key metrics for cluster Topics, Brokers, and more, with a health-score system
- Cluster operations: group Brokers into Regions, use Regions as the unit of resource partitioning, and split logical clusters by business and assurance level
- Broker operations: including preferred-replica election and other actions
- Topic operations: create, query, expand, modify properties, migrate, take offline, and more

#### 1.2.3 Governance experience map

- Resource governance: distilled governance methods for frequent problems such as Topic partition hotspots and insufficient partitions, turning resource governance into an expert practice
- Resource approval: a ticket system; Topic creation, quota adjustments, partition requests, and similar operations are approved by operations staff, regulating resource usage and keeping the platform stable
- Billing system: cost control; Topic and cluster resources are requested and used on demand, with fees calculated from traffic, helping enterprises build a big-data cost accounting system

### 1.3 Core strengths

- Efficient problem diagnosis: monitors many core metrics with statistics at different percentiles and a rich set of metric reports, helping users and operators locate problems quickly and efficiently
- Convenient cluster operations: Regions define the unit of cluster resource partitioning, and logical clusters are split by assurance level, easing resource isolation and scaling while keeping strong control over the server side
- Professional resource governance: distilled from years of operating practice inside DiDi, with a health-score system targeting frequent problems such as Topic partition hotspots and insufficient partitions
- Friendly operations ecosystem: integrates with DiDi's Nightingale monitoring and alerting system, bundling monitoring and alerting, cluster deployment, and cluster upgrade capabilities into an operations ecosystem of distilled expert services

### 1.4 DiDi Logi-KafkaManager architecture

## 2 Documentation

### 2.1 Product documentation

- [DiDi Logi-KafkaManager installation guide](docs/install_guide/install_guide_cn.md)
- [DiDi Logi-KafkaManager cluster onboarding](docs/user_guide/add_cluster/add_cluster.md)
- [DiDi Logi-KafkaManager user guide](docs/user_guide/user_guide_cn.md)
- [DiDi Logi-KafkaManager FAQ](docs/user_guide/faq.md)

### 2.2 Community articles

- [Product introduction on the DiDi Cloud site](https://www.didiyun.com/production/logi-KafkaManager.html)
- [Seven years in the making: the DiDi Logi log service suite](https://mp.weixin.qq.com/s/-KQp-Qo3WKEOc9wIR2iFnw)
- [DiDi Logi-KafkaManager: a one-stop Kafka monitoring and management platform](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ)
- [The open-source journey of DiDi Logi-KafkaManager](https://xie.infoq.cn/article/0223091a99e697412073c0d64)
- [DiDi Logi-KafkaManager video tutorial series](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g)
- [Kafka in practice (15): a study of DiDi's open-source Kafka management platform Logi-KafkaManager](https://blog.csdn.net/yezonggang/article/details/113106244)
- [Article series on Logi-KafkaManager, Kafka's soulmate](https://blog.csdn.net/u010634066/category_10977588.html)

## 3 DiDi Logi open-source user group

To join on WeChat, add the WeChat ID mike_zhangliang with the note "Logi加群", or follow the official account 云原生可观测性 and reply "Logi加群".

## 4 Knowledge Planet

✅ The first [Kafka Chinese community] on Knowledge Planet; join for free during the beta: https://z.didi.cn/5gSF9
Every question gets an answer!
Interaction is rewarded!
1600+ members building the most professional Kafka Chinese community together.
PS: Please describe your problem fully in one go and include environment details (version, steps taken, error/warning messages, and so on) so the experts can answer quickly.

## 5 Project members

### 5.1 Core members

`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`, `zhaoyinrui`, `marzkonglingxu`, `joysunchao`

### 5.2 External contributors

`fangjunyu`, `zhoutaiyang`

## 6 License

`LogiKM` is distributed and used under the `Apache-2.0` license; see the [license file](./LICENSE) for details.
@@ -1,141 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**

---

## v2.4.1+

Release date: 2021-05-21

### Enhancements
- Added APIs to grant permissions and quotas directly (v2.4.1)
- Added support for API calls that bypass login (v2.4.1)

### Improvements
- Upgraded Tomcat to 8.5.66 (v2.4.2)
- Reworked the op APIs, splitting the util API into topic and leader APIs (v2.4.1)
- Simplified the key length of the Gateway configuration (v2.4.1)

### Bug fixes
- Fixed the wrong version being displayed on the page (v2.4.2)

## v2.4.0

Release date: 2021-05-18

### Enhancements

- Added auto-approval switches for Apps and Topics
- Added Rack information to Broker metadata
- Upgraded the MySQL driver to support MySQL 8+
- Added an operation-record query page

### Improvements

- Improved the FAQ's explanation of alert groups
- Clarified the shared vs. dedicated cluster concepts in the user guide
- User management page: the frontend now prevents users from deleting themselves

### Bug fixes

- Fixed the op-util API that failed to create Topics
- Fixed the periodic job that syncs Topics to the DB: the Topic list is now read directly from the DB instead of the cache
- Fixed app offboarding approval failures by filtering out entries with permission 0 (no permission)
- Fixed a login and permission bypass vulnerability
- Fixed the developer role being shown the "add cluster" and "pause monitoring" buttons

## v2.3.0

Release date: 2021-02-08

### Enhancements

- Added support for Docker deployment
- Brokers can be designated as candidate controllers
- Gateway configurations can be added and managed
- Consumer group status can be retrieved
- Added JMX authentication for clusters

### Improvements

- Improved the flows for editing user roles and changing passwords
- Added search by consumerID
- Improved the wording of "Topic connection info", "reset consumer group offsets", and "modify Topic retention time"
- Added a link to the resource application document in the relevant places

### Bug fixes

- Fixed the time axis display on Broker monitoring charts
- Fixed the wrong alert-period unit used when creating Nightingale alert rules

## v2.2.0

Release date: 2021-01-25

### Enhancements

- Improved the batch-operation workflow for tickets
- Added real-time 75th/99th-percentile Topic latency data
- Added a scheduled task that periodically writes ownerless Topics not yet in the DB into the DB

### Improvements

- Added a link to the cluster onboarding document in the relevant places
- Clarified the meaning of physical and logical clusters
- Show the Region a Topic belongs to on the Topic detail page and in the partition-expansion dialog
- Improved the retention-time configuration flow during Topic approval
- Improved error messages shown during Topic/App applications and approvals
- Improved the wording of the Topic data-sampling action
- Improved the prompt shown to operators when deleting a Topic
- Improved the deletion logic and prompt when operators delete a Region
- Improved the prompt when operators delete a logical cluster
- Improved the file-type restriction when uploading cluster configuration files

### Bug fixes

- Fixed a special-character validation error in application names
- Fixed ordinary users being able to access application details without authorization
- Fixed the data compression format being unretrievable after a Kafka version upgrade
- Fixed logical clusters or Topics still being displayed after deletion
- Fixed duplicate result prompts during Leader rebalance operations

## v2.1.0

Release date: 2020-12-19

### Improvements

- Improved the background style shown while pages load
- Improved the flow for ordinary users applying for Topic permissions
- Improved the permission limits on Topic quota and partition applications
- Improved the wording shown when revoking Topic permissions
- Improved the field names on the quota application form
- Improved the offset-reset workflow
- Improved the form for creating Topic migration tasks
- Improved the style of the Topic partition-expansion dialog
- Improved the styles of cluster Broker monitoring charts
- Improved the form for creating logical clusters
- Improved the prompt for cluster security protocols

### Bug fixes

- Fixed occasional failures when resetting consumer offsets
@@ -1,28 +0,0 @@
FROM openjdk:16-jdk-alpine3.13

LABEL author="fengxsong"

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && apk add --no-cache tini

ENV VERSION 2.4.2
WORKDIR /opt/

ENV AGENT_HOME /opt/agent/
COPY docker-depends/config.yaml $AGENT_HOME
COPY docker-depends/jmx_prometheus_javaagent-0.15.0.jar $AGENT_HOME

ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.15.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M"
ENV JAVA_OPTS="-verbose:gc \
    -XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
    -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"

RUN wget https://github.com/didi/Logi-KafkaManager/releases/download/v${VERSION}/kafka-manager-${VERSION}.tar.gz && \
    tar xvf kafka-manager-${VERSION}.tar.gz && \
    mv kafka-manager-${VERSION}/kafka-manager.jar /opt/app.jar && \
    rm -rf kafka-manager-${VERSION}*

EXPOSE 8080 9999

ENTRYPOINT ["tini", "--"]

CMD [ "sh", "-c", "java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS app.jar --spring.config.location=application.yml"]
@@ -1,5 +0,0 @@
---
startDelaySeconds: 0
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
@@ -1,6 +0,0 @@
dependencies:
- name: mysql
  repository: https://charts.bitnami.com/bitnami
  version: 8.6.3
digest: sha256:d250c463c1d78ba30a24a338a06a551503c7a736621d974fe4999d2db7f6143e
generated: "2021-06-24T11:34:54.625217+08:00"
@@ -1,29 +0,0 @@
apiVersion: v2
name: didi-km
description: Logi-KafkaManager

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "2.4.2"

dependencies:
- condition: mysql.enabled
  name: mysql
  repository: https://charts.bitnami.com/bitnami
  version: 8.x.x
@@ -1,22 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "didi-km.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "didi-km.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "didi-km.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "didi-km.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "didi-km.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "didi-km.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "didi-km.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "didi-km.labels" -}}
helm.sh/chart: {{ include "didi-km.chart" . }}
{{ include "didi-km.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "didi-km.selectorLabels" -}}
app.kubernetes.io/name: {{ include "didi-km.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "didi-km.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "didi-km.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
@@ -1,101 +0,0 @@
{{- define "datasource.mysql" -}}
{{- if .Values.mysql.enabled }}
{{- printf "%s-mysql" (include "didi-km.fullname" .) -}}
{{- else -}}
{{- printf "%s" .Values.externalDatabase.host -}}
{{- end -}}
{{- end -}}

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "didi-km.fullname" . }}-configs
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
data:
  application.yml: |
    server:
      port: 8080
      tomcat:
        accept-count: 1000
        max-connections: 10000
        max-threads: 800
        min-spare-threads: 100

    spring:
      application:
        name: kafkamanager
      datasource:
        kafka-manager:
          jdbc-url: jdbc:mysql://{{ include "datasource.mysql" . }}:3306/{{ .Values.mysql.auth.database }}?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false
          username: {{ .Values.mysql.auth.username }}
          password: {{ .Values.mysql.auth.password }}
          driver-class-name: com.mysql.jdbc.Driver
      main:
        allow-bean-definition-overriding: true

      profiles:
        active: dev
      servlet:
        multipart:
          max-file-size: 100MB
          max-request-size: 100MB

    logging:
      config: classpath:logback-spring.xml

    custom:
      idc: cn
      jmx:
        max-conn: 20
      store-metrics-task:
        community:
          broker-metrics-enabled: true
          topic-metrics-enabled: true
        didi:
          app-topic-metrics-enabled: false
          topic-request-time-metrics-enabled: false
          topic-throttled-metrics: false
        save-days: 7

    # task-related switches
    task:
      op:
        sync-topic-enabled: false # periodically sync topics not yet persisted into the DB

    account:
      # ldap settings
      ldap:
        enabled: false
        authUserRegistration: false

    kcm:
      enabled: false
      storage:
        base-url: http://127.0.0.1
      n9e:
        base-url: http://127.0.0.1:8004
        user-token: 12345678
        timeout: 300
        account: root
        script-file: kcm_script.sh

    monitor:
      enabled: false
      n9e:
        nid: 2
        user-token: 1234567890
        mon:
          base-url: http://127.0.0.1:8032
        sink:
          base-url: http://127.0.0.1:8006
        rdb:
          base-url: http://127.0.0.1:80

    notify:
      kafka:
        cluster-id: 95
        topic-name: didi-kafka-notify
      order:
        detail-url: http://127.0.0.1
@@ -1,64 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "didi-km.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "didi-km.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "didi-km.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: jmx-metrics
              containerPort: 9999
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: configs
              mountPath: /tmp/application.yml
              subPath: application.yml
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        - name: configs
          configMap:
            name: {{ include "didi-km.fullname" . }}-configs
@@ -1,28 +0,0 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "didi-km.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "didi-km.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "didi-km.fullname" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "didi-km.selectorLabels" . | nindent 4 }}
@@ -1,12 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "didi-km.serviceAccountName" . }}
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "didi-km.fullname" . }}-test-connection"
  labels:
    {{- include "didi-km.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "didi-km.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
@@ -1,93 +0,0 @@
# Default values for didi-km.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: docker.io/fengxsong/logi-kafka-manager
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "v2.4.2"

imagePullSecrets: []
nameOverride: ""
# fullnameOverride must be set to the same value as the release name
fullnameOverride: "km"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 500m
    memory: 2048Mi
  requests:
    cpu: 100m
    memory: 200Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

# more configurations are set with the configmap in file templates/configmap.yaml
externalDatabase:
  host: ""
mysql:
  # if enabled is set to false, you must manually specify externalDatabase.host
  enabled: true
  architecture: standalone
  auth:
    rootPassword: "s3cretR00t"
    database: "logi_kafka_manager"
    username: "logi_kafka_manager"
    password: "n0tp@55w0rd"
@@ -1,16 +0,0 @@
#!/bin/bash

cd `dirname $0`/../target
target_dir=`pwd`

pid=`ps ax | grep -i 'kafka-manager' | grep ${target_dir} | grep java | grep -v grep | awk '{print $1}'`
if [ -z "$pid" ] ; then
    echo "No kafka-manager running."
    exit 1;
fi

echo "The kafka-manager (${pid}) is running..."

kill ${pid}

echo "Sent shutdown request to kafka-manager (${pid}) OK"
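The pid lookup above can be exercised without a running service. The sketch below (illustrative only; the `ps` lines and paths are made up) runs the same grep/awk pipeline over canned output to show which process line survives each filter: the `grep ${target_dir}` step is what keeps the script from killing an unrelated Java process, and `grep -v grep` drops the grep process itself.

```shell
#!/bin/sh
# Canned sample of `ps ax` output (hypothetical pids and paths).
target_dir=/opt/kafka-manager/target
ps_output='  101 ?  S  0:01 /usr/bin/java -jar /opt/kafka-manager/target/kafka-manager.jar
  202 ?  S  0:00 grep -i kafka-manager
  303 ?  S  0:05 /usr/bin/java -jar /opt/other/app.jar'

# Same pipeline as the stop script, minus the live `ps ax`.
pid=$(printf '%s\n' "$ps_output" | grep -i 'kafka-manager' | grep "${target_dir}" | grep java | grep -v grep | awk '{print $1}')
echo "$pid"   # 101
```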
@@ -1,81 +0,0 @@
#!/bin/bash

error_exit ()
{
    echo "ERROR: $1 !!"
    exit 1
}

# detect macOS, since JAVA_HOME is located differently there
darwin=false
case "`uname`" in
    Darwin*) darwin=true;;
esac

[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && unset JAVA_HOME

if [ -z "$JAVA_HOME" ]; then
  if $darwin; then
    if [ -x '/usr/libexec/java_home' ] ; then
      export JAVA_HOME=`/usr/libexec/java_home`
    elif [ -d "/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home" ]; then
      export JAVA_HOME="/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home"
    fi
  else
    JAVA_PATH=`dirname $(readlink -f $(which javac))`
    if [ "x$JAVA_PATH" != "x" ]; then
      export JAVA_HOME=`dirname $JAVA_PATH 2>/dev/null`
    fi
  fi
  if [ -z "$JAVA_HOME" ]; then
    error_exit "Please set the JAVA_HOME variable in your environment; a 64-bit JDK 8 or later is required"
  fi
fi

export WEB_SERVER="kafka-manager"
export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=`cd $(dirname $0)/..; pwd`
export CUSTOM_SEARCH_LOCATIONS=file:${BASE_DIR}/conf/

#===========================================================================================
# JVM Configuration
#===========================================================================================

JAVA_OPT="${JAVA_OPT} -server -Xms2g -Xmx2g -Xmn1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${BASE_DIR}/logs/java_heapdump.hprof"

## some GC flags were removed in newer JDKs, so pick the logging flags by major version
JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then
  JAVA_OPT="${JAVA_OPT} -Xlog:gc*:file=${BASE_DIR}/logs/km_gc.log:time,tags:filecount=10,filesize=102400"
else
  JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${JAVA_HOME}/lib/ext"
  JAVA_OPT="${JAVA_OPT} -Xloggc:${BASE_DIR}/logs/km_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
fi

JAVA_OPT="${JAVA_OPT} -jar ${BASE_DIR}/target/${WEB_SERVER}.jar"
JAVA_OPT="${JAVA_OPT} --spring.config.additional-location=${CUSTOM_SEARCH_LOCATIONS}"
JAVA_OPT="${JAVA_OPT} --logging.config=${BASE_DIR}/conf/logback-spring.xml"
JAVA_OPT="${JAVA_OPT} --server.max-http-header-size=524288"

if [ ! -d "${BASE_DIR}/logs" ]; then
  mkdir ${BASE_DIR}/logs
fi

echo "$JAVA ${JAVA_OPT}"

# check the start.out log output file
if [ ! -f "${BASE_DIR}/logs/start.out" ]; then
  touch "${BASE_DIR}/logs/start.out"
fi
# start
echo -e "---- startup command ------\n $JAVA ${JAVA_OPT}" > ${BASE_DIR}/logs/start.out 2>&1

nohup $JAVA ${JAVA_OPT} >> ${BASE_DIR}/logs/start.out 2>&1 &

echo "${WEB_SERVER} is starting; you can check ${BASE_DIR}/logs/start.out"
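The JDK-version branch above hinges on one sed expression. A small sketch (the `java -version` lines are assumed samples, not captured output) shows what it extracts: a modern JDK reports e.g. `"11.0.2"` and yields `11`, while JDK 8 still reports itself as `"1.8.0_..."` and yields `1`, which is why the script compares against 9 rather than 8.

```shell
#!/bin/sh
# Assumed sample first lines of `java -version` output for JDK 11 and JDK 8.
sample11='openjdk version "11.0.2" 2019-01-15'
sample8='java version "1.8.0_281"'

# Same extraction as the startup script, applied to a string instead of a live JVM.
major() { printf '%s\n' "$1" | sed -E -n 's/.* version "([0-9]*).*$/\1/p'; }

major "$sample11"   # prints 11
major "$sample8"    # prints 1
```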
@@ -1,28 +0,0 @@
## kafka-manager configuration file; settings here override the defaults.
## The settings below essentially mirror the default application.yml inside the jar;
## you can keep only the settings you change and delete the rest -- e.g. configure just mysql.

server:
  port: 8080
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  profiles:
    active: dev
  datasource:
    kafka-manager:
      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
      username: root
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
  main:
    allow-bean-definition-overriding: true
@@ -1,116 +0,0 @@
## kafka-manager configuration file; settings here override the defaults.
## The settings below essentially mirror the default application.yml inside the jar;
## you can keep only the settings you change and delete the rest -- e.g. configure just mysql.

server:
  port: 8080
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  profiles:
    active: dev
  datasource:
    kafka-manager:
      jdbc-url: jdbc:mysql://localhost:3306/logi_kafka_manager?characterEncoding=UTF-8&useSSL=false&serverTimezone=GMT%2B8
      username: root
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
  main:
    allow-bean-definition-overriding: true

  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 100MB

logging:
  config: classpath:logback-spring.xml

custom:
  idc: cn # data center of the deployment; ignore this setting, it will be removed later
  jmx:
    max-conn: 10 # since version 2.3 this setting no longer takes effect here
  store-metrics-task:
    community:
      broker-metrics-enabled: true # switch for collecting community broker metrics; when off, they are neither collected nor written to the DB
      topic-metrics-enabled: true # switch for collecting community topic metrics; when off, they are neither collected nor written to the DB
    didi:
      app-topic-metrics-enabled: false # metric instrumented by Didi; not present in community Apache Kafka, so off by default
      topic-request-time-metrics-enabled: false # metric instrumented by Didi; not present in community Apache Kafka, so off by default
      topic-throttled-metrics: false # metric instrumented by Didi; not present in community Apache Kafka, so off by default
    save-days: 7 # how many days metrics are kept in the DB; -1 keeps them forever, 7 keeps the last 7 days

# task-related switches
task:
  op:
    sync-topic-enabled: false # periodically sync topics not yet persisted into the DB
    order-auto-exec: # switches for the automatic work-order approval threads
      topic-enabled: false # automatic approval of Topic work orders; false: off, true: on
      app-enabled: false # automatic approval of App work orders; false: off, true: on

# ldap-related settings
account:
  ldap:
    enabled: false
    url: ldap://127.0.0.1:389/
    basedn: dc=tsign,dc=cn
    factory: com.sun.jndi.ldap.LdapCtxFactory
    filter: sAMAccountName
    security:
      authentication: simple
      principal: cn=admin,dc=tsign,dc=cn
      credentials: admin
    auth-user-registration: true
    auth-user-registration-role: normal

# cluster upgrade/deployment features; require N9E (Nightingale) and S3
kcm:
  enabled: false
  s3:
    endpoint: s3.didiyunapi.com
    access-key: 1234567890
    secret-key: 0987654321
    bucket: logi-kafka
  n9e:
    base-url: http://127.0.0.1:8004
    user-token: 12345678
    timeout: 300
    account: root
    script-file: kcm_script.sh

# monitoring and alerting features; require N9E (Nightingale)
# enabled: whether monitoring/alerting is enabled; true: on, false: off
# n9e.nid: the N9E node ID
# n9e.user-token: the user's token, found in the N9E personal settings
# n9e.mon.base-url: monitoring address
# n9e.sink.base-url: data reporting address
# n9e.rdb.base-url: user resource center address

monitor:
  enabled: false
  n9e:
    nid: 2
    user-token: 1234567890
    mon:
      base-url: http://127.0.0.1:8000 # in N9E v4 the default port is uniformly 8000
    sink:
      base-url: http://127.0.0.1:8000 # in N9E v4 the default port is uniformly 8000
    rdb:
      base-url: http://127.0.0.1:8000 # in N9E v4 the default port is uniformly 8000

notify: # notification features
  kafka: # by default, notifications are sent to a designated Kafka topic
    cluster-id: 95 # cluster ID of the topic
    topic-name: didi-kafka-notify # topic name
  order: # address of the deployed KM
    detail-url: http://127.0.0.1
@@ -1,591 +0,0 @@
-- create database
CREATE DATABASE logi_kafka_manager;

USE logi_kafka_manager;

--
-- Table structure for table `account`
--

-- DROP TABLE IF EXISTS `account`;
CREATE TABLE `account` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `username` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '用户名',
  `password` varchar(128) NOT NULL DEFAULT '' COMMENT '密码',
  `role` tinyint(8) NOT NULL DEFAULT '0' COMMENT '角色类型, 0:普通用户 1:研发 2:运维',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '0标识使用中,-1标识已废弃',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_username` (`username`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='账号表';
INSERT INTO account(username, password, role) VALUES ('admin', '21232f297a57a5a743894a0e4a801fc3', 2);

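The seeded `admin` row stores its password as an unsalted MD5 hex digest: `21232f297a57a5a743894a0e4a801fc3` is simply `md5("admin")`. A quick check (the helper name is ours, for illustration):

```python
import hashlib

def md5_hex(password: str) -> str:
    # The `account` seed row stores the password as an unsalted MD5 hex digest.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

print(md5_hex("admin"))  # 21232f297a57a5a743894a0e4a801fc3
```

Since unsalted MD5 digests of common passwords are trivially reversible, changing this default after installation is advisable.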
--
-- Table structure for table `app`
--

-- DROP TABLE IF EXISTS `app`;
CREATE TABLE `app` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `app_id` varchar(128) NOT NULL DEFAULT '' COMMENT '应用id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT '应用名称',
  `password` varchar(256) NOT NULL DEFAULT '' COMMENT '应用密码',
  `type` int(11) NOT NULL DEFAULT '0' COMMENT '类型, 0:普通用户, 1:超级用户',
  `applicant` varchar(64) NOT NULL DEFAULT '' COMMENT '申请人',
  `principals` text COMMENT '应用负责人',
  `description` text COMMENT '应用描述',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`),
  UNIQUE KEY `uniq_app_id` (`app_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='应用信息';


--
-- Table structure for table `authority`
--

-- DROP TABLE IF EXISTS `authority`;
CREATE TABLE `authority` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `app_id` varchar(128) NOT NULL DEFAULT '' COMMENT '应用id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) NOT NULL DEFAULT '' COMMENT 'topic名称',
  `access` int(11) NOT NULL DEFAULT '0' COMMENT '0:无权限, 1:读, 2:写, 3:读写',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_app_id_cluster_id_topic_name` (`app_id`,`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='权限信息(kafka-manager)';

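The `access` values (0: none, 1: read, 2: write, 3: read+write) behave like a two-bit mask. A minimal sketch of permission checks under that bitwise interpretation — the DDL comment only enumerates the four values, so treat this reading as illustrative:

```python
# Access bits per the column comment: 0 none, 1 read, 2 write, 3 read+write.
READ, WRITE = 1, 2

def can_read(access: int) -> bool:
    return bool(access & READ)

def can_write(access: int) -> bool:
    return bool(access & WRITE)

print(can_read(3), can_write(1))  # True False
```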
--
-- Table structure for table `broker`
--

-- DROP TABLE IF EXISTS `broker`;
CREATE TABLE `broker` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `broker_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'brokerid',
  `host` varchar(128) NOT NULL DEFAULT '' COMMENT 'broker主机名',
  `port` int(16) NOT NULL DEFAULT '-1' COMMENT 'broker端口',
  `timestamp` bigint(20) NOT NULL DEFAULT '-1' COMMENT '启动时间',
  `max_avg_bytes_in` bigint(20) NOT NULL DEFAULT '-1' COMMENT '峰值的均值流量',
  `version` varchar(128) NOT NULL DEFAULT '' COMMENT 'broker版本',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '状态: 0有效,-1无效',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_broker_id` (`cluster_id`,`broker_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='broker信息表';

--
-- Table structure for table `broker_metrics`
--

-- DROP TABLE IF EXISTS `broker_metrics`;
CREATE TABLE `broker_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `broker_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'brokerid',
  `metrics` text COMMENT '指标',
  `messages_in` double(53,2) NOT NULL DEFAULT '0.00' COMMENT '每秒消息数流入',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_broker_id_gmt_create` (`cluster_id`,`broker_id`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='broker-metric信息表';

--
-- Table structure for table `cluster`
--

-- DROP TABLE IF EXISTS `cluster`;
CREATE TABLE `cluster` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '集群id',
  `cluster_name` varchar(128) NOT NULL DEFAULT '' COMMENT '集群名称',
  `zookeeper` varchar(512) NOT NULL DEFAULT '' COMMENT 'zk地址',
  `bootstrap_servers` varchar(512) NOT NULL DEFAULT '' COMMENT 'server地址',
  `kafka_version` varchar(32) NOT NULL DEFAULT '' COMMENT 'kafka版本',
  `security_properties` text COMMENT 'Kafka安全认证参数',
  `jmx_properties` text COMMENT 'JMX配置',
  `status` tinyint(4) NOT NULL DEFAULT '1' COMMENT '监控标记, 0表示未监控, 1表示监控中',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_name` (`cluster_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='cluster信息表';

--
-- Table structure for table `cluster_metrics`
--

-- DROP TABLE IF EXISTS `cluster_metrics`;
CREATE TABLE `cluster_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `metrics` text COMMENT '指标',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_gmt_create` (`cluster_id`,`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='clustermetrics信息';

--
-- Table structure for table `cluster_tasks`
--

-- DROP TABLE IF EXISTS `cluster_tasks`;
CREATE TABLE `cluster_tasks` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `uuid` varchar(128) NOT NULL DEFAULT '' COMMENT '任务UUID',
  `cluster_id` bigint(128) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `task_type` varchar(128) NOT NULL DEFAULT '' COMMENT '任务类型',
  `kafka_package` text COMMENT 'kafka包',
  `kafka_package_md5` varchar(128) NOT NULL DEFAULT '' COMMENT 'kafka包的md5',
  `server_properties` text COMMENT 'kafkaserver配置',
  `server_properties_md5` varchar(128) NOT NULL DEFAULT '' COMMENT '配置文件的md5',
  `agent_task_id` bigint(128) NOT NULL DEFAULT '-1' COMMENT '任务id',
  `agent_rollback_task_id` bigint(128) NOT NULL DEFAULT '-1' COMMENT '回滚任务id',
  `host_list` text COMMENT '升级的主机',
  `pause_host_list` text COMMENT '暂停点',
  `rollback_host_list` text COMMENT '回滚机器列表',
  `rollback_pause_host_list` text COMMENT '回滚暂停机器列表',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT '操作人',
  `task_status` int(11) NOT NULL DEFAULT '0' COMMENT '任务状态',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='集群任务(集群升级部署)';

--
-- Table structure for table `config`
--

-- DROP TABLE IF EXISTS `config`;
CREATE TABLE `config` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `config_key` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '配置key',
  `config_value` text COMMENT '配置value',
  `config_description` text COMMENT '备注说明',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '0标识使用中,-1标识已废弃',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_config_key` (`config_key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='配置表';

--
-- Table structure for table `controller`
--

-- DROP TABLE IF EXISTS `controller`;
CREATE TABLE `controller` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `broker_id` int(16) NOT NULL DEFAULT '-1' COMMENT 'brokerid',
  `host` varchar(256) NOT NULL DEFAULT '' COMMENT '主机名',
  `timestamp` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'controller变更时间',
  `version` int(16) NOT NULL DEFAULT '-1' COMMENT 'controller格式版本',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_broker_id_timestamp` (`cluster_id`,`broker_id`,`timestamp`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='controller记录表';

--
-- Table structure for table `gateway_config`
--

-- DROP TABLE IF EXISTS `gateway_config`;
CREATE TABLE `gateway_config` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `type` varchar(128) NOT NULL DEFAULT '' COMMENT '配置类型',
  `name` varchar(128) NOT NULL DEFAULT '' COMMENT '配置名称',
  `value` text COMMENT '配置值',
  `version` bigint(20) unsigned NOT NULL DEFAULT '1' COMMENT '版本信息',
  `description` text COMMENT '描述信息',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_type_name` (`type`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='gateway配置';
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_QUEUE_SIZE', 'SD_QUEUE_SIZE', 100000000, 1, '任意集群队列大小');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_APP_RATE', 'SD_APP_RATE', 100000000, 1, '任意一个App限速');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_IP_RATE', 'SD_IP_RATE', 100000000, 1, '任意一个IP限速');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_SP_RATE', 'app_01234567', 100000000, 1, '指定App限速');
INSERT INTO gateway_config(type, name, value, `version`, `description`) values('SD_SP_RATE', '192.168.0.1', 100000000, 1, '指定IP限速');

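The seed rows above key rate-limit settings by (`type`, `name`): the first three are type-wide defaults, while the `SD_SP_RATE` rows target a specific app or IP. A minimal sketch of that lookup pattern, using Python's `sqlite3` with a simplified copy of the table (the `rate_limit` helper is illustrative, not LogiKM's API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified gateway_config: the (type, name) pair is unique, as in the DDL.
conn.execute("""CREATE TABLE gateway_config (
    type TEXT NOT NULL, name TEXT NOT NULL, value TEXT,
    version INTEGER NOT NULL DEFAULT 1,
    UNIQUE (type, name))""")
seed = [("SD_QUEUE_SIZE", "SD_QUEUE_SIZE", "100000000", 1),
        ("SD_SP_RATE", "app_01234567", "100000000", 1),
        ("SD_SP_RATE", "192.168.0.1", "100000000", 1)]
conn.executemany(
    "INSERT INTO gateway_config(type, name, value, version) VALUES (?, ?, ?, ?)", seed)

def rate_limit(kind: str, name: str):
    # Look up a setting by its (type, name) key; None when no row matches.
    row = conn.execute(
        "SELECT value FROM gateway_config WHERE type = ? AND name = ?",
        (kind, name)).fetchone()
    return int(row[0]) if row else None

print(rate_limit("SD_SP_RATE", "app_01234567"))  # 100000000
```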
--
-- Table structure for table `heartbeat`
--

-- DROP TABLE IF EXISTS `heartbeat`;
CREATE TABLE `heartbeat` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `ip` varchar(128) NOT NULL DEFAULT '' COMMENT '主机ip',
  `hostname` varchar(256) NOT NULL DEFAULT '' COMMENT '主机名',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_ip` (`ip`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='心跳信息';

--
-- Table structure for table `kafka_acl`
--

-- DROP TABLE IF EXISTS `kafka_acl`;
CREATE TABLE `kafka_acl` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `app_id` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '用户id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `access` int(11) NOT NULL DEFAULT '0' COMMENT '0:无权限, 1:读, 2:写, 3:读写',
  `operation` int(11) NOT NULL DEFAULT '0' COMMENT '0:创建, 1:更新 2:删除, 以最新的一条数据为准',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='权限信息(kafka-broker)';

--
-- Table structure for table `kafka_bill`
--

-- DROP TABLE IF EXISTS `kafka_bill`;
CREATE TABLE `kafka_bill` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `principal` varchar(64) NOT NULL DEFAULT '' COMMENT '负责人',
  `quota` double(53,2) NOT NULL DEFAULT '0.00' COMMENT '配额, 单位mb/s',
  `cost` double(53,2) NOT NULL DEFAULT '0.00' COMMENT '成本, 单位元',
  `cost_type` int(16) NOT NULL DEFAULT '0' COMMENT '成本类型, 0:共享集群, 1:独享集群, 2:独立集群',
  `gmt_day` varchar(64) NOT NULL DEFAULT '' COMMENT '计价的日期, 例如2019-02-02的计价结果',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name_gmt_day` (`cluster_id`,`topic_name`,`gmt_day`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='kafka账单';

--
-- Table structure for table `kafka_file`
--

-- DROP TABLE IF EXISTS `kafka_file`;
CREATE TABLE `kafka_file` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `storage_name` varchar(128) NOT NULL DEFAULT '' COMMENT '存储位置',
  `file_name` varchar(128) NOT NULL DEFAULT '' COMMENT '文件名',
  `file_md5` varchar(256) NOT NULL DEFAULT '' COMMENT '文件md5',
  `file_type` int(16) NOT NULL DEFAULT '-1' COMMENT '0:kafka压缩包, 1:kafkaserver配置',
  `description` text COMMENT '备注信息',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT '创建用户',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '状态, 0:正常, -1:删除',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_file_name_storage_name` (`cluster_id`,`file_name`,`storage_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='文件管理';

--
-- Table structure for table `kafka_user`
--

-- DROP TABLE IF EXISTS `kafka_user`;
CREATE TABLE `kafka_user` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `app_id` varchar(128) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '应用id',
  `password` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT '密码',
  `user_type` int(11) NOT NULL DEFAULT '0' COMMENT '0:普通用户, 1:超级用户',
  `operation` int(11) NOT NULL DEFAULT '0' COMMENT '0:创建, 1:更新 2:删除, 以最新一条的记录为准',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='kafka用户表';
INSERT INTO app(app_id, name, password, type, applicant, principals, description) VALUES ('dkm_admin', 'KM管理员', 'km_kMl4N8as1Kp0CCY', 1, 'admin', 'admin', 'KM管理员应用-谨慎对外提供');
INSERT INTO kafka_user(app_id, password, user_type, operation) VALUES ('dkm_admin', 'km_kMl4N8as1Kp0CCY', 1, 0);


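`kafka_user` (like `kafka_acl`) has no unique key on `app_id`; the `operation` comment says the latest record wins ("以最新一条的记录为准"), so the table is an append-only change log. A hedged sketch of how a reader could fold that log into current state (the row shape and `current_users` helper are illustrative, not LogiKM's code):

```python
# `operation` values from the column comment: 0 create, 1 update, 2 delete.
OP_CREATE, OP_UPDATE, OP_DELETE = 0, 1, 2

def current_users(rows):
    """Fold (app_id, password, operation) rows, ordered by id, into current state."""
    state = {}
    for app_id, password, operation in rows:
        if operation == OP_DELETE:
            state.pop(app_id, None)
        else:  # create or update: the latest row for an app_id wins
            state[app_id] = password
    return state

print(current_users([("dkm_admin", "km_kMl4N8as1Kp0CCY", OP_CREATE)]))
```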
--
-- Table structure for table `logical_cluster`
--

CREATE TABLE `logical_cluster` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群名称',
  `identification` varchar(192) NOT NULL DEFAULT '' COMMENT '逻辑集群标识',
  `mode` int(16) NOT NULL DEFAULT '0' COMMENT '逻辑集群类型, 0:共享集群, 1:独享集群, 2:独立集群',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT '所属应用',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `region_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'regionid列表',
  `description` text COMMENT '备注说明',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`),
  UNIQUE KEY `uniq_identification` (`identification`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8 COMMENT='逻辑集群信息表';


--
-- Table structure for table `monitor_rule`
--

-- DROP TABLE IF EXISTS `monitor_rule`;
CREATE TABLE `monitor_rule` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT '告警名称',
  `strategy_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '监控id',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'appid',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT '操作人',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='监控规则';

--
-- Table structure for table `operate_record`
--

-- DROP TABLE IF EXISTS `operate_record`;
CREATE TABLE `operate_record` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `module_id` int(16) NOT NULL DEFAULT '-1' COMMENT '模块类型, 0:topic, 1:应用, 2:配额, 3:权限, 4:集群, -1:未知',
  `operate_id` int(16) NOT NULL DEFAULT '-1' COMMENT '操作类型, 0:新增, 1:删除, 2:修改',
  `resource` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称、app名称',
  `content` text COMMENT '操作内容',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT '操作人',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  KEY `idx_module_id_operate_id_operator` (`module_id`,`operate_id`,`operator`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='操作记录';

--
-- Table structure for table `reassign_task`
--

-- DROP TABLE IF EXISTS `reassign_task`;
CREATE TABLE `reassign_task` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `task_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '任务ID',
  `name` varchar(256) NOT NULL DEFAULT '' COMMENT '任务名称',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) NOT NULL DEFAULT '' COMMENT 'Topic名称',
  `partitions` text COMMENT '分区',
  `reassignment_json` text COMMENT '任务参数',
  `real_throttle` bigint(20) NOT NULL DEFAULT '0' COMMENT '限流值',
  `max_throttle` bigint(20) NOT NULL DEFAULT '0' COMMENT '限流上限',
  `min_throttle` bigint(20) NOT NULL DEFAULT '0' COMMENT '限流下限',
  `begin_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '开始时间',
  `operator` varchar(64) NOT NULL DEFAULT '' COMMENT '操作人',
  `description` varchar(256) NOT NULL DEFAULT '' COMMENT '备注说明',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '任务状态',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '任务创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '任务修改时间',
  `original_retention_time` bigint(20) NOT NULL DEFAULT '86400000' COMMENT 'Topic存储时间',
  `reassign_retention_time` bigint(20) NOT NULL DEFAULT '86400000' COMMENT '迁移时的存储时间',
  `src_brokers` text COMMENT '源Broker',
  `dest_brokers` text COMMENT '目标Broker',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic迁移信息';

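`real_throttle` is the effective reassignment throttle, bounded by `min_throttle` and `max_throttle`. The invariant can be captured by a one-line clamp — a sketch of the relationship implied by the column comments, not LogiKM's actual throttle-adjustment logic:

```python
def clamp_throttle(desired: int, min_throttle: int, max_throttle: int) -> int:
    # Keep the effective throttle within the task's configured bounds.
    return max(min_throttle, min(max_throttle, desired))

print(clamp_throttle(5_000_000, 1_000_000, 3_000_000))  # 3000000
```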
--
-- Table structure for table `region`
--

-- DROP TABLE IF EXISTS `region`;
CREATE TABLE `region` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(192) NOT NULL DEFAULT '' COMMENT 'region名称',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `broker_list` varchar(256) NOT NULL DEFAULT '' COMMENT 'broker列表',
  `capacity` bigint(20) NOT NULL DEFAULT '0' COMMENT '容量(B/s)',
  `real_used` bigint(20) NOT NULL DEFAULT '0' COMMENT '实际使用量(B/s)',
  `estimate_used` bigint(20) NOT NULL DEFAULT '0' COMMENT '预估使用量(B/s)',
  `description` text COMMENT '备注说明',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '状态,0正常,1已满',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='region信息表';

--
-- Table structure for table `topic`
--

-- DROP TABLE IF EXISTS `topic`;
CREATE TABLE `topic` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'topic所属appid',
  `peak_bytes_in` bigint(20) NOT NULL DEFAULT '0' COMMENT '峰值流量',
  `description` text COMMENT '备注信息',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name` (`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic信息表';

--
-- Table structure for table `topic_app_metrics`
--

-- DROP TABLE IF EXISTS `topic_app_metrics`;
CREATE TABLE `topic_app_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'appid',
  `metrics` text COMMENT '指标',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_app_id_gmt_create` (`cluster_id`,`topic_name`,`app_id`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic app metrics';

--
-- Table structure for table `topic_connections`
--

-- DROP TABLE IF EXISTS `topic_connections`;
CREATE TABLE `topic_connections` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT '应用id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `type` varchar(16) NOT NULL DEFAULT '' COMMENT 'producer or consumer',
  `ip` varchar(32) NOT NULL DEFAULT '' COMMENT 'ip地址',
  `client_version` varchar(8) NOT NULL DEFAULT '' COMMENT '客户端版本',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_app_id_cluster_id_topic_name_type_ip_client_version` (`app_id`,`cluster_id`,`topic_name`,`type`,`ip`,`client_version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic连接信息表';

--
-- Table structure for table `topic_expired`
--

-- DROP TABLE IF EXISTS `topic_expired`;
CREATE TABLE `topic_expired` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `produce_connection_num` bigint(20) NOT NULL DEFAULT '0' COMMENT '发送连接数',
  `fetch_connection_num` bigint(20) NOT NULL DEFAULT '0' COMMENT '消费连接数',
  `expired_day` bigint(20) NOT NULL DEFAULT '0' COMMENT '过期天数',
  `gmt_retain` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '保留截止时间',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '-1:可下线, 0:过期待通知, 1+:已通知待反馈',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name` (`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic过期信息表';

--
-- Table structure for table `topic_metrics`
--

-- DROP TABLE IF EXISTS `topic_metrics`;
CREATE TABLE `topic_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `topic_name` varchar(192) NOT NULL DEFAULT '' COMMENT 'topic名称',
  `metrics` text COMMENT '指标数据JSON',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_gmt_create` (`cluster_id`,`topic_name`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topicmetrics表';

--
-- Table structure for table `topic_report`
--

-- DROP TABLE IF EXISTS `topic_report`;
CREATE TABLE `topic_report` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '0' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '开始上报时间',
  `end_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '结束上报时间',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name` (`cluster_id`,`topic_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='开启jmx采集的topic';

--
-- Table structure for table `topic_request_time_metrics`
--

-- DROP TABLE IF EXISTS `topic_request_time_metrics`;
CREATE TABLE `topic_request_time_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `metrics` text COMMENT '指标',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_gmt_create` (`cluster_id`,`topic_name`,`gmt_create`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic请求耗时信息';

--
-- Table structure for table `topic_statistics`
--

-- DROP TABLE IF EXISTS `topic_statistics`;
CREATE TABLE `topic_statistics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '自增id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic名称',
  `offset_sum` bigint(20) NOT NULL DEFAULT '-1' COMMENT 'offset和',
  `max_avg_bytes_in` double(53,2) NOT NULL DEFAULT '-1.00' COMMENT '峰值的均值流量',
  `gmt_day` varchar(64) NOT NULL DEFAULT '' COMMENT '日期2020-03-30的形式',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `max_avg_messages_in` double(53,2) NOT NULL DEFAULT '-1.00' COMMENT '峰值的均值消息条数',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uniq_cluster_id_topic_name_gmt_day` (`cluster_id`,`topic_name`,`gmt_day`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic统计信息表';

--
-- Table structure for table `topic_throttled_metrics`
--

-- DROP TABLE IF EXISTS `topic_throttled_metrics`;
CREATE TABLE `topic_throttled_metrics` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `cluster_id` bigint(20) NOT NULL DEFAULT '-1' COMMENT '集群id',
  `topic_name` varchar(192) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'topic name',
  `app_id` varchar(64) NOT NULL DEFAULT '' COMMENT 'app',
  `produce_throttled` tinyint(8) NOT NULL DEFAULT '0' COMMENT '是否是生产耗时',
  `fetch_throttled` tinyint(8) NOT NULL DEFAULT '0' COMMENT '是否是消费耗时',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`),
  KEY `idx_cluster_id_topic_name_app_id` (`cluster_id`,`topic_name`,`app_id`),
  KEY `idx_gmt_create` (`gmt_create`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='topic限流信息';

--
-- Table structure for table `work_order`
--

-- DROP TABLE IF EXISTS `work_order`;
CREATE TABLE `work_order` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
  `type` int(16) NOT NULL DEFAULT '-1' COMMENT '工单类型',
  `title` varchar(512) NOT NULL DEFAULT '' COMMENT '工单标题',
  `applicant` varchar(64) NOT NULL DEFAULT '' COMMENT '申请人',
  `description` text COMMENT '备注信息',
  `approver` varchar(64) NOT NULL DEFAULT '' COMMENT '审批人',
  `gmt_handle` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '审批时间',
  `opinion` varchar(256) NOT NULL DEFAULT '' COMMENT '审批信息',
  `extensions` text COMMENT '扩展信息',
  `status` int(16) NOT NULL DEFAULT '0' COMMENT '工单状态, 0:待审批, 1:通过, 2:拒绝, 3:取消',
  `gmt_create` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modify` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='工单表';
@@ -1,215 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds">
    <contextName>logback</contextName>
    <property name="log.path" value="./logs" />

    <!-- Colored console logging -->
    <!-- Converter classes the colored output depends on -->
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
    <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
    <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />
    <!-- Colored log pattern -->
    <property name="CONSOLE_LOG_PATTERN" value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}"/>

    <!-- Console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>info</level>
        </filter>
        <encoder>
            <Pattern>${CONSOLE_LOG_PATTERN}</Pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- File output -->

    <!-- Time-rolling DEBUG log -->
    <appender name="DEBUG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/log_debug.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- charset -->
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Archived log name pattern -->
            <fileNamePattern>${log.path}/log_debug_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days of history to keep -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records DEBUG-level events only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>debug</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Time-rolling INFO log -->
    <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the log file currently being written -->
        <file>${log.path}/log_info.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Daily archive path and name pattern -->
            <fileNamePattern>${log.path}/log_info_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days of history to keep -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records INFO-level events only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>info</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Time-rolling WARN log -->
    <appender name="WARN_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the log file currently being written -->
        <file>${log.path}/log_warn.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- charset -->
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/log_warn_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days of history to keep -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records WARN-level events only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>warn</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Time-rolling ERROR log -->
    <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- Path and name of the log file currently being written -->
        <file>${log.path}/log_error.log</file>
        <!-- Log file output pattern -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset> <!-- charset -->
        </encoder>
        <!-- Rolling policy: roll by date and by size -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/log_error_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <!-- Days of history to keep -->
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <!-- This file records ERROR-level events only -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Metrics collection log -->
    <appender name="COLLECTOR_METRICS_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/metrics/collector_metrics.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/metrics/collector_metrics_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>3</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- API metrics log -->
    <appender name="API_METRICS_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/metrics/api_metrics.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/metrics/api_metrics_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>3</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- Scheduled-task log -->
    <appender name="SCHEDULED_TASK_LOGGER" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/metrics/scheduled_tasks.log</file>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/metrics/scheduled_tasks_%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>100MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>5</maxHistory>
        </rollingPolicy>
    </appender>

    <logger name="COLLECTOR_METRICS_LOGGER" level="DEBUG" additivity="false">
        <appender-ref ref="COLLECTOR_METRICS_LOGGER"/>
    </logger>
    <logger name="API_METRICS_LOGGER" level="DEBUG" additivity="false">
        <appender-ref ref="API_METRICS_LOGGER"/>
    </logger>
    <logger name="SCHEDULED_TASK_LOGGER" level="DEBUG" additivity="false">
        <appender-ref ref="SCHEDULED_TASK_LOGGER"/>
    </logger>

    <logger name="org.apache.ibatis" level="INFO" additivity="false" />
    <logger name="org.mybatis.spring" level="INFO" additivity="false" />
    <logger name="com.github.miemiedev.mybatis.paginator" level="INFO" additivity="false" />

    <root level="info">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="DEBUG_FILE" />
        <appender-ref ref="INFO_FILE" />
        <appender-ref ref="WARN_FILE" />
        <appender-ref ref="ERROR_FILE" />
        <!--<appender-ref ref="METRICS_LOG" />-->
    </root>

    <!-- Production profile: file output -->
    <!--<springProfile name="pro">-->
        <!--<root level="info">-->
            <!--<appender-ref ref="CONSOLE" />-->
            <!--<appender-ref ref="DEBUG_FILE" />-->
            <!--<appender-ref ref="INFO_FILE" />-->
            <!--<appender-ref ref="ERROR_FILE" />-->
            <!--<appender-ref ref="WARN_FILE" />-->
        <!--</root>-->
    <!--</springProfile>-->
</configuration>
@@ -1,64 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <artifactId>kafka-manager</artifactId>
        <groupId>com.xiaojukeji.kafka</groupId>
        <version>${kafka-manager.revision}</version>
    </parent>

    <modelVersion>4.0.0</modelVersion>

    <artifactId>distribution</artifactId>
    <name>distribution</name>
    <packaging>pom</packaging>

    <dependencies>
        <dependency>
            <groupId>${project.groupId}</groupId>
            <artifactId>kafka-manager-web</artifactId>
            <version>${kafka-manager.revision}</version>
        </dependency>
    </dependencies>

    <profiles>
        <profile>
            <id>release-kafka-manager</id>
            <dependencies>
                <dependency>
                    <groupId>${project.groupId}</groupId>
                    <artifactId>kafka-manager-web</artifactId>
                    <version>${kafka-manager.revision}</version>
                </dependency>
            </dependencies>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-assembly-plugin</artifactId>
                        <configuration>
                            <descriptors>
                                <descriptor>release-km.xml</descriptor>
                            </descriptors>
                            <tarLongFileMode>posix</tarLongFileMode>
                        </configuration>
                        <executions>
                            <execution>
                                <id>make-assembly</id>
                                <phase>install</phase>
                                <goals>
                                    <goal>single</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
                <finalName>kafka-manager</finalName>
            </build>
        </profile>
    </profiles>
</project>
@@ -1,22 +0,0 @@
## Notes

### 1. Create the MySQL database tables
> conf/create_mysql_table.sql

### 2. Edit the configuration file
> conf/application.yml.example
> Copy application.yml.example to a file named application.yml,
> keep it in the same directory (conf/), and change it to your own settings.
> Values here take precedence over the defaults bundled inside the jar.
>

### 3. Start / stop kafka-manager
> sh bin/startup.sh — start
>
> sh shutdown.sh — stop
>

### 4. Upgrading the jar
> When upgrading, see `upgrade_config.md` for the history of configuration changes.
>
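The steps above can be sketched as a short shell session. Paths follow the distribution layout described here; the example config file is created as an empty stand-in only so that the sketch is self-contained:

```shell
# Run from the unpacked kafka-manager directory.
mkdir -p conf bin
: > conf/application.yml.example   # stand-in for the shipped example file

# Step 2: copy the example config, then edit it with your own settings.
cp conf/application.yml.example conf/application.yml

# Step 3 would be: sh bin/startup.sh (start) / sh shutdown.sh (stop) — not run here.
test -f conf/application.yml && echo "config in place"
```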
@@ -1,51 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>

<assembly>
    <id>${project.version}</id>
    <includeBaseDirectory>true</includeBaseDirectory>
    <formats>
        <format>dir</format>
        <format>tar.gz</format>
        <format>zip</format>
    </formats>
    <fileSets>
        <fileSet>
            <includes>
                <include>conf/**</include>
            </includes>
        </fileSet>

        <fileSet>
            <includes>
                <include>bin/*</include>
            </includes>
            <fileMode>0755</fileMode>
        </fileSet>
    </fileSets>
    <files>
        <file>
            <source>readme.md</source>
            <destName>readme.md</destName>
        </file>
        <file>
            <source>upgrade_config.md</source>
            <destName>upgrade_config.md</destName>
        </file>
        <file>
            <!-- Name and target directory of the built jar -->
            <source>../kafka-manager-web/target/kafka-manager.jar</source>
            <outputDirectory>target/</outputDirectory>
        </file>
    </files>

    <moduleSets>
        <moduleSet>
            <useAllReactorProjects>true</useAllReactorProjects>
            <includes>
                <include>com.xiaojukeji.kafka:kafka-manager-web</include>
            </includes>
        </moduleSet>
    </moduleSets>
</assembly>
@@ -1,42 +0,0 @@
## Configuration changes between versions
> This file is maintained from V2.2.0 onward. If a version changed the configuration, the change is recorded below; if a version is not listed, nothing changed.
> When upgrading from a much older version, run the SQL scripts of every intermediate version that had changes, in order.

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

### 1. Upgrading to `V2.2.0`

#### 1. MySQL changes

Version `2.2.0` adds one column to each of the `cluster` and `logical_cluster` tables, so the following SQL must be executed.

```sql
# Add the jmx_properties column to the cluster table; it stores JMX authentication and related settings.
ALTER TABLE `cluster` ADD COLUMN `jmx_properties` TEXT NULL COMMENT 'JMX properties' AFTER `security_properties`;

# Add the identification column to logical_cluster, copy the existing name values into it, and add a unique key.
# From now on, name remains the cluster name, while identification is the cluster identifier
# (letters, digits and underscores only). Data reported to the monitoring system identifies the
# cluster by identification, where previously the name column was used.
ALTER TABLE `logical_cluster` ADD COLUMN `identification` VARCHAR(192) NOT NULL DEFAULT '' COMMENT 'logical cluster identifier' AFTER `name`;

UPDATE `logical_cluster` SET `identification`=`name` WHERE id>=0;

ALTER TABLE `logical_cluster` ADD INDEX `uniq_identification` (`identification` ASC);
```

### Upgrading to `2.3.0`

#### 1. MySQL changes
Version `2.3.0` adds a description column to the `gateway_config` table, so the following SQL must be executed.

```sql
ALTER TABLE `gateway_config`
ADD COLUMN `description` TEXT NULL COMMENT 'description' AFTER `version`;
```
@@ -1,101 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

## Fixing JMX connection failures

Once a cluster is connected to Logi-KafkaManager, its broker list is visible. If, at that point, real-time Topic traffic or real-time broker traffic does not show up, the cause is most likely a JMX connection problem.

Let's check it step by step.

### 1. Symptoms & causes

**Case 1: JMX is not enabled**

If JMX is not enabled, go straight to `2. Fix` to see how to enable it.

**Case 2: JMX is misconfigured**

Even with the `JMX` port open, an incorrect configuration can still make connections fail. Typical causes:

- Wrong `JMX` configuration: see `2. Fix`.
- A firewall or network restriction: `telnet` the port from another machine on the same network to see whether it is reachable.
- Username/password authentication is required: see `3. Fix — authenticated JMX`.
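The firewall check above can also be run with `nc` from another machine; the host and port below are placeholders matching the example logs in this document:

```shell
# Probe the broker's JMX port (placeholder host/port).
BROKER_HOST=192.168.0.1
JMX_PORT=9999
if command -v nc >/dev/null 2>&1 && nc -z -w 3 "$BROKER_HOST" "$JMX_PORT" 2>/dev/null; then
  msg="JMX port ${BROKER_HOST}:${JMX_PORT} is reachable"
else
  msg="JMX port ${BROKER_HOST}:${JMX_PORT} is NOT reachable - check JMX_PORT, firewall and hostname settings"
fi
echo "$msg"
```

If the port is unreachable from a machine that can otherwise reach the broker, fix the network or firewall rules before changing anything in Kafka itself.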

Example error logs:
```
# Error 1: the message shows the real IP, so the JMX configuration itself is most likely wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:

# Error 2: the message shows 127.0.0.1, which usually means the machine's hostname is misconfigured.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
```

### 2. Fix

Here we only describe a fairly generic fix; if you know a better way, please share it.

Edit `kafka-server-start.sh`:
```
# Add the JMX port inside this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT=9999 # add this line; the port does not have to be 9999
fi
```

Edit `kafka-run-class.sh`:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${this machine's IP}"
fi

# JMX port to use
if [ $JMX_PORT ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
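To sanity-check what these two snippets produce, the same logic can be run standalone; the echoed flags are what the broker JVM ends up with (the IP is a placeholder):

```shell
# Standalone re-run of the kafka-run-class.sh JMX logic above.
KAFKA_JMX_OPTS=""          # pretend nothing was set beforehand
JMX_PORT=9999              # as exported in kafka-server-start.sh
HOST_IP=192.168.0.1        # placeholder for the broker's real IP

if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${HOST_IP}"
fi
if [ "$JMX_PORT" ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
echo "$KAFKA_JMX_OPTS"
```

Setting both `jmxremote.port` and `jmxremote.rmi.port` to the same value keeps the RMI callback on a single, predictable port, which makes firewall rules much simpler.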

### 3. Fix — authenticated JMX

If you jumped straight to this section, first read `2. Fix` to make sure the basic `JMX` configuration is correct.

If the JMX configuration is fine and the connection still fails because of authentication, use the method below.

**This backend support was only recently finished and may not be fully polished; feel free to reach out with any problems.**

Versions from `Logi-KafkaManager 2.2.0+` onward support authenticated `JMX` connections in the backend, but there is no UI for it yet. Instead, write the `JMX` credentials into the `jmx_properties` column of the `cluster` table.

The value is a `json` string, for example:

```json
{
    "maxConn": 10,        # max number of JMX connections from KM to a single broker
    "username": "xxxxx",  # username
    "password": "xxxx",   # password
    "openSSL": true       # whether to use SSL: true enables it, false disables it
}
```

Example SQL:
```sql
UPDATE cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false }' where id={xxx};
```
@@ -1,168 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Dynamic configuration management

## 0. Contents

- 1. Scheduled Topic sync task
- 2. Expert service — Topic partition hotspots
- 3. Expert service — insufficient Topic partitions
- 4. Expert service — Topic resource governance
- 5. Billing configuration

## 1. Scheduled Topic sync task

### 1.1 What this configuration is for
By design, every resource in `Logi-KafkaManager` hangs under an application (app). If a Kafka cluster already has Topics when it is connected, those Topics belong to no application, which makes them awkward to manage.

We therefore need a way to attach these ownerless Topics to some application.

This configuration does exactly that: a scheduled task periodically attaches a cluster's ownerless Topics to a designated application.

### 1.2 Implementation

It is a scheduled task that periodically performs the sync. The code lives in the `SyncTopic2DB` class in the `com.xiaojukeji.kafka.manager.task.dispatch.op` package.

### 1.3 Configuration

**Step 1: enable the feature**

Add the following to application.yml (if the key already exists, just change false to true):
```yml
# task switches
task:
  op:
    sync-topic-enabled: true # periodically sync ownerless Topics into the DB
```

**Step 2: in configuration management, choose which application to attach them to**

Where to configure:

Configuration key: `SYNC_TOPIC_2_DB_CONFIG_KEY`

Configuration value (a JSON array):
- clusterId: ID of the cluster to sync on a schedule
- defaultAppId: the application the cluster's ownerless Topics will be attached to
- addAuthority: whether to also grant permissions; defaults to false. The attachment is meant to be temporary — we don't want users actually using this app, and the Topic may later be handed over to its real owner — so permissions are not granted by default.

**Note: if the cluster ID or the application ID does not exist, the configuration has no effect. The task never modifies Topics that are already in the DB.**
```json
[
    {
        "clusterId": 1234567,
        "defaultAppId": "ANONYMOUS",
        "addAuthority": false
    },
    {
        "clusterId": 7654321,
        "defaultAppId": "ANONYMOUS",
        "addAuthority": false
    }
]
```

---

## 2. Expert service — Topic partition hotspots

Within the set of brokers a `Region` covers, a Topic whose leader count is unevenly distributed across those brokers is considered a hotspot Topic.

Note: looking only at the leader distribution has its limits; contributions of richer hotspot definitions and code are welcome.

Dynamic configuration for Topic partition hotspots (page: Operations → Platform management → Configuration management):

Configuration key:
```
REGION_HOT_TOPIC_CONFIG
```

Configuration value:
```json
{
    "maxDisPartitionNum": 2,         # a Topic is a hotspot when the leader-count gap between brokers in a Region exceeds 2
    "minTopicBytesInUnitB": 1048576, # Topics with traffic below this value are not counted
    "ignoreClusterIdList": [         # clusters to ignore
        50
    ]
}
```

---

## 3. Expert service — insufficient Topic partitions

When the total traffic divided by the partition count exceeds a given value, we consider the Topic's partitions insufficient.

Dynamic configuration for insufficient Topic partitions (page: Operations → Platform management → Configuration management):

Configuration key:
```
TOPIC_INSUFFICIENT_PARTITION_CONFIG
```

Configuration value:
```json
{
    "maxBytesInPerPartitionUnitB": 3145728, # partitions are considered insufficient when per-partition traffic exceeds this value
    "minTopicBytesInUnitB": 1048576,        # Topics with traffic below this value are not counted
    "ignoreClusterIdList": [                # clusters to ignore
        50
    ]
}
```
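As a worked example of the rule above (the traffic numbers are hypothetical): a topic taking 30 MB/s of total bytes-in across 6 partitions averages 5 MB/s per partition, which exceeds the 3 MB/s `maxBytesInPerPartitionUnitB` threshold:

```shell
bytes_in=31457280   # hypothetical total bytes-in: 30 MB/s
partitions=6
threshold=3145728   # maxBytesInPerPartitionUnitB from the config above (3 MB/s)

per_partition=$(( bytes_in / partitions ))
echo "per-partition bytes-in: ${per_partition} B/s"
if [ "$per_partition" -gt "$threshold" ]; then
  echo "flagged: partitions insufficient"
fi
```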
## 4. Expert service — Topic resource governance

A Topic whose partition offsets have not changed at all within a given period — that is, a Topic with no data being written — is considered an expired Topic.

Dynamic configuration for expired Topics (page: Operations → Platform management → Configuration management):

Configuration key:
```
EXPIRED_TOPIC_CONFIG
```

Configuration value:
```json
{
    "minExpiredDay": 30,     # only Topics expired for more than this many days are shown
    "ignoreClusterIdList": [ # clusters to ignore
        50
    ]
}
```

## 5. Billing configuration

Besides being a Kafka operations platform, Logi-KafkaManager also has some resource-pricing features.

Current pricing: a Topic's usage quota for the month is the mean of its maxAvgDay highest daily traffic peaks. The monthly cost equals quota × unit price × premium (reserved buffer).
The detailed calculation lives in com.xiaojukeji.kafka.manager.task.dispatch.biz.CalKafkaTopicBill and com.xiaojukeji.kafka.manager.task.dispatch.biz.CalTopicStatistics.

The configuration used when computing a Topic's cost:

Configuration key:
```
KAFKA_TOPIC_BILL_CONFIG
```

Configuration value:

```json
{
    "maxAvgDay": 10,    # rule for computing the usage quota
    "quotaRatio": 1.5,  # premium ratio
    "priseUnitMB": 100  # unit price, i.e. cost per MB/s of traffic
}
```
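With the configuration above, the monthly cost works out as quota × priseUnitMB × quotaRatio. For a hypothetical topic whose mean of the 10 highest daily peaks is 10 MB/s:

```shell
quota_mb=10        # hypothetical usage quota: mean of the maxAvgDay daily peaks, in MB/s
prise_unit_mb=100  # priseUnitMB from the config above
quota_ratio=1.5    # quotaRatio from the config above

bill=$(awk -v q="$quota_mb" -v p="$prise_unit_mb" -v r="$quota_ratio" \
  'BEGIN { printf "%.0f", q * p * r }')
echo "monthly bill: $bill"   # 10 * 100 * 1.5 = 1500
```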
@@ -1,10 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Kafka-Gateway configuration
@@ -1,42 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Monitoring system integration — Nightingale (n9e)

- `Kafka-Manager` submits both the monitoring data and the alert rules to Nightingale, and relies on Nightingale's monitoring stack to provide monitoring and alerting.

- Metric reporting and alert-rule creation already work. Features such as viewing alert history and the metric data at alert-trigger time are still being integrated (for now, view them in Nightingale itself); contributions and code are welcome.

## 1. Configuration

```yml
# monitoring-related section of the configuration file
monitor:
  enabled: false
  n9e:
    nid: 2
    user-token: 123456
    # Nightingale "mon" monitoring service address
    mon:
      base-url: http://127.0.0.1:8006
    # Nightingale "transfer" upload service address
    sink:
      base-url: http://127.0.0.1:8008
    # Nightingale "rdb" resource service address
    rdb:
      base-url: http://127.0.0.1:80

# enabled: whether monitoring/alerting is enabled; true: on, false: off
# n9e.nid: Nightingale node ID
# n9e.user-token: the user's token, found in Nightingale's personal settings
# n9e.mon.base-url: monitoring address
# n9e.sink.base-url: metric upload address
# n9e.rdb.base-url: user/resource center address
```
@@ -1,54 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Monitoring system integration

- By default the monitoring system integrates with [Nightingale](https://github.com/didi/nightingale).
- Integrating your own monitoring system requires only light development: implement the relevant interfaces of the monitoring/alerting module.
- Integration has two parts: metric reporting, and alert-rule management.

## 1. Metric reporting integration

After just this step, monitoring data can already be reported to your monitoring system, and you can configure alert rules there.

**Step 1: implement the metric-reporting interface**

- Assemble the data into the format your in-house monitoring system expects and report it; the Nightingale integration code is a good reference.
- To see which metrics get reported, look at the callers of this interface.

**Step 2: adjust the configuration**

**Step 3: enable the reporting task**

## 2. Alert-rule integration

After completing **1. Metric reporting integration**, you can configure alert rules in your own monitoring system. After completing this step as well, you can create, read, update and delete alert rules from within `Logi-KafkaManager`.

This works much like **1. Metric reporting integration**.

**Step 1: implement the relevant interfaces**

After step 1, the remaining work is the same as steps 2 and 3 of **1. Metric reporting integration**: adjust the corresponding configuration.

## 3. Summary

This was a quick tour of monitoring/alerting integration. If you want to keep it minimal, completing only **1. Metric reporting integration** already covers many use cases.

**If anything in this document is unclear, or you have suggestions, join the chat group, contribute code, and if you find the project useful please give it a star.**
@@ -1,27 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Upgrading to `2.2.0`

Version `2.2.0` adds one column to each of the `cluster` and `logical_cluster` tables, so the following SQL must be executed.

```sql
# Add the jmx_properties column to the cluster table; it stores JMX authentication and related settings.
ALTER TABLE `cluster` ADD COLUMN `jmx_properties` TEXT NULL COMMENT 'JMX properties' AFTER `security_properties`;

# Add the identification column to logical_cluster, copy the existing name values into it, and add a unique key.
# From now on, name remains the cluster name, while identification is the cluster identifier
# (letters, digits and underscores only). Data reported to the monitoring system identifies the
# cluster by identification, where previously the name column was used.
ALTER TABLE `logical_cluster` ADD COLUMN `identification` VARCHAR(192) NOT NULL DEFAULT '' COMMENT 'logical cluster identifier' AFTER `name`;

UPDATE `logical_cluster` SET `identification`=`name` WHERE id>=0;

ALTER TABLE `logical_cluster` ADD INDEX `uniq_identification` (`identification` ASC);
```
@@ -1,17 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Upgrading to `2.3.0`

Version `2.3.0` adds a description column to the `gateway_config` table, so the following SQL must be executed.

```sql
ALTER TABLE `gateway_config`
ADD COLUMN `description` TEXT NULL COMMENT 'description' AFTER `version`;
```
@@ -1,41 +0,0 @@

---

**A one-stop `Apache Kafka` cluster metrics monitoring and operations platform**

---

# Using `MySQL 8`

Thanks to [herry-hu](https://github.com/herry-hu) for this solution.

Because `MySQL 8` and `MySQL 5.7` cannot currently be supported at the same time, the code still defaults to `MySQL 5.7`.

To use `MySQL 8`, make the following small code changes.

- Step 1. Change the MySQL driver class in application.yml
```shell
# Change the class after driver-class-name to:
# driver-class-name: com.mysql.jdbc.Driver
driver-class-name: com.mysql.cj.jdbc.Driver
```

- Step 2. Change the MySQL dependency
```shell
# In the root pom.xml, change the version of the `MySQL` dependency to:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    # <version>5.1.41</version>
    <version>8.0.20</version>
</dependency>
```
---

# How to Report a New Metric to the Monitoring System?

## 0. Preface

LogiKM is a **one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**. It currently reports metrics such as consumer lag and topic traffic to the monitoring system, so that users can configure alerting rules on these metrics there and thereby monitor whether their own clients are healthy.

So what do we do if we want to add a new metric — for example broker traffic, broker liveness, or the number of cluster controllers?

Before getting into specifics: Kafka monitoring information is essentially all stored in the brokers, in JMX, and in ZooKeeper. LogiKM already has the basic ability to fetch data from all three places, so collecting further metrics on top of LogiKM is on the whole quite straightforward.

Below we take the already-collected topic traffic metric as an example and look at how LogiKM fetches and reports a topic metric.

---

## 1. Locate the Metric

From what we know of Kafka, the topic traffic metric is stored in JMX, so we need to fetch it from JMX. If you are not sure where the metric you need is stored, join the Kafka Chinese community we maintain (the QR code is in the README) and discuss it there.

---

## 2. Fetch the Metric

The details of fetching the topic traffic metric are explained in the figure below.


---

## 3. Report the Metric

The previous step collected the topic traffic metric; the next step is to report it to the monitoring system, which simply means sending the data in the format the monitoring system requires.

LogiKM has a monitor module for this, shown below:


## 4. Further Reading

For details on integrating with a monitoring system, see:

[Monitoring system integration](./monitor_system_integrate_with_self.md)

[Monitoring system integration example — integrating N9e (Nightingale)](./monitor_system_integrate_with_n9e.md)
---

# Configuration Reference

```yaml
server:
  port: 8080 # service port
  tomcat:
    accept-count: 1000
    max-connections: 10000
    max-threads: 800
    min-spare-threads: 100

spring:
  application:
    name: kafkamanager
  datasource:
    kafka-manager: # database connection config
      jdbc-url: jdbc:mysql://127.0.0.1:3306/kafka_manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8 # database address
      username: admin # username
      password: admin # password
      driver-class-name: com.mysql.jdbc.Driver
  main:
    allow-bean-definition-overriding: true

  profiles:
    active: dev # active profile
  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 100MB

logging:
  config: classpath:logback-spring.xml

custom:
  idc: cn # data center of the deployment; ignore this setting, it will be removed later
  jmx:
    max-conn: 10 # maximum number of JMX connections per broker
  store-metrics-task:
    community:
      broker-metrics-enabled: true # collection switch for community broker metrics; when off, these metrics are neither collected nor written to the DB
      topic-metrics-enabled: true # collection switch for community topic metrics; when off, these metrics are neither collected nor written to the DB
    didi:
      app-topic-metrics-enabled: false # metric instrumented by Didi; absent in community Apache Kafka, so off by default
      topic-request-time-metrics-enabled: false # metric instrumented by Didi; absent in community Apache Kafka, so off by default
      topic-throttled-metrics: false # metric instrumented by Didi; absent in community Apache Kafka, so off by default
    save-days: 7 # days metrics are kept in the DB; -1 keeps them forever, 7 keeps the last 7 days

# task-related switches
task:
  op:
    sync-topic-enabled: false # periodically sync topics not yet persisted into the DB
    order-auto-exec: # switches for the automatic order-approval threads
      topic-enabled: false # automatic approval of topic orders; false: disabled, true: enabled
      app-enabled: false # automatic approval of app orders; false: disabled, true: enabled

account: # LDAP-related config; community support is still incomplete and can be ignored for now — contributions to improve it are welcome
  ldap:

kcm: # cluster upgrade/deployment features; requires N9e (Nightingale) and S3, and involves changes to the kcm_script.sh script — a dedicated doc will follow
  enabled: false # disabled by default
  storage:
    base-url: http://127.0.0.1 # storage address
  n9e:
    base-url: http://127.0.0.1:8004 # address of the N9e job center
    user-token: 12345678 # N9e user token
    timeout: 300 # cluster task timeout, in seconds
    account: root # account used by cluster tasks
    script-file: kcm_script.sh # cluster task script

monitor: # monitoring/alerting features; requires N9e (Nightingale)
  enabled: false # disabled by default; true enables it
  n9e:
    nid: 2
    user-token: 1234567890
    mon:
      # address of the N9e mon (monitoring) service
      base-url: http://127.0.0.1:8032
    sink:
      # address of the N9e transfer (upload) service
      base-url: http://127.0.0.1:8006
    rdb:
      # address of the N9e rdb (resource) service
      base-url: http://127.0.0.1:80

# enabled: whether monitoring/alerting is on; true: on, false: off
# n9e.nid: N9e node ID
# n9e.user-token: the user's token, found in the N9e personal settings
# n9e.mon.base-url: monitoring address
# n9e.sink.base-url: data reporting address
# n9e.rdb.base-url: user resource center address

notify: # notification features
  kafka: # by default, notifications are sent to a designated Kafka topic
    cluster-id: 95 # cluster ID of the topic
    topic-name: didi-kafka-notify # topic name
  order: # address of the deployed KM
    detail-url: http://127.0.0.1
```
---

# Installation Guide

## 1. Environment Dependencies

If you install from a release package, only `Java` and `MySQL` are required. If you want to build from source first and then install, a `Maven` and `Node` environment is also needed.

- `Java 8+` (runtime)
- `MySQL 5.7` (data storage)
- `Maven 3.5+` (backend build)
- `Node 10+` (frontend build)

---

## 2. Get the Package

**1. Download a release directly**

If you want to avoid the hassle and have no plans for secondary development, download a release package directly: [GitHub release downloads](https://github.com/didi/Logi-KafkaManager/releases)

If GitHub downloads are too slow, you can also get the package in the `Logi-KafkaManager` user group; the group address is in the README.

**2. Build from source**

After downloading the code, enter the `Logi-KafkaManager` root directory and run `mvn -Prelease-kafka-manager -Dmaven.test.skip=true clean install -U`.
When it finishes, the `distribution/target` directory contains a `kafka-manager-*.tar.gz` and a `kafka-manager-*.zip`; either archive will do.
The same directory also contains an already-extracted folder.

---

## 3. Extract the Package

After extraction, the directory contains a MySQL initialization file at `kafka-manager/conf/create_mysql_table.sql`.
Initialize the DB first.

## 4. MySQL DB Initialization

Run the SQL commands in [create_mysql_table.sql](../../distribution/conf/create_mysql_table.sql) to create the required MySQL database and tables. The default database name is `logi_kafka_manager`.

```
# Example:
mysql -uXXXX -pXXX -h XXX.XXX.XXX.XXX -PXXXX < ./create_mysql_table.sql
```

---

## 5. Modify the Configuration

Copy `conf/application.yml.example` to a file named `application.yml` in the same directory (`conf/application.yml`), then adjust the configuration.
If you leave it unchanged, the defaults are used — but at the very least point the MySQL settings at your own instance.

## 6. Start/Stop

The extracted package contains start and stop scripts:

`kafka-manager/bin/startup.sh`
`kafka-manager/bin/shutdown.sh`

Run `sh startup.sh` to start and `sh shutdown.sh` to stop.

## 7. Usage

For a local start, open `http://localhost:8080` and log in with the default account and password (`admin/admin`). See also: [kafka-manager user guide](../user_guide/user_guide_cn.md)

## 8. Upgrading

When upgrading between versions, see the [kafka-manager upgrade guide](../../distribution/upgrade_config.md).
The same notes also ship in the release package (V2.5 and later) at kafka-manager/upgrade_config.md.

## 9. Starting in an IDE

> If you want to contribute, or want to start the project from an IDE:
> first run `mvn -Dmaven.test.skip=true clean install -U`
>
> At this point you may comment out the `kafka-manager-console` module in [pom.xml](../../pom.xml);
> otherwise every install repackages the frontend `kafka-manager-console` into `kafka-manager-web`.
>
> After that, simply run the
> com.xiaojukeji.kafka.manager.web.MainApplication main method of the `kafka-manager-web` module from your IDE.
---

# nginx Configuration Guide

## 1. Standalone Deployment

See: [kafka-manager installation guide](install_guide_cn.md)

## 2. nginx Configuration

### 1. Standalone deployment configuration

```
# nginx root-path access configuration
location / {
    proxy_pass http://ip:port;
}
```

### 2. Frontend/backend separation & serving multiple static resources

The configuration below lets nginx proxy multiple static resources, separating the frontend from the backend so each can be released and iterated independently.

#### 1. Download the source

Download the code for the version you need: [GitHub download](https://github.com/didi/Logi-KafkaManager)

#### 2. Modify webpack.config.js

Edit `webpack.config.js` in the `kafka-manager-console` module.
Every <font color='red'>xxxx</font> below is the nginx proxy path and the URL prefix baked into the static files at build time; change <font color='red'>xxxx</font> as needed.

```
cd kafka-manager-console
vi webpack.config.js

# publicPath defaults to the root directory; change it to the nginx proxy path.
let publicPath = '/xxxx';
```

#### 3. Build

```
npm cache clean --force && npm install
```

Note: if the build fails, run `npm install clipboard@2.0.6`; otherwise ignore this.

#### 4. Deploy

##### 1. Deploy the frontend static files

Static resources: `../kafka-manager-web/src/main/resources/templates`

Upload them to a directory of your choice; this demo uses the `root` directory.

##### 2. Upload the jar and start it; see: [kafka-manager installation guide](install_guide_cn.md)

##### 3. Modify the nginx configuration

```
location /xxxx {
    # location of the static files
    alias /root/templates;
    try_files $uri $uri/ /xxxx/index.html;
    index index.html;
}

location /api {
    proxy_pass http://ip:port;
}
# /api is the recommended backend prefix; if it conflicts, use the following instead
#location /api/v2 {
#    proxy_pass http://ip:port;
#}
#location /api/v1 {
#    proxy_pass http://ip:port;
#}
```
---

# Cluster Onboarding

## Key Concepts

To handle large clusters and complex business scenarios, the concepts of Region and logical cluster are introduced:

- Region: a group of brokers forming a unit of resource partitioning, improving scalability and isolation — if some topics misbehave, only the brokers of that Region are affected rather than a large share of the cluster
- Logical cluster: a set of Regions, making it convenient to manage a large cluster by business line and service level



Onboarding a cluster takes three steps:

1. Onboard the physical cluster: fill in the machine addresses, security protocol, and other configuration to connect the real physical cluster
2. Create a Region: group some brokers into a Region
3. Create a logical cluster: compose Regions into logical clusters according to business lines and service levels



**Note: steps 2 and 3 are required because ordinary users only see logical clusters; without these two steps, ordinary users would see nothing at all.**

## 1. Onboard the Physical Cluster



As shown above, fill in the cluster information and click Confirm to complete the onboarding. Because of the distributed deployment, after adding a cluster you need to wait about **`1 minute`** before the cluster details appear in the UI.

## 2. Create a Region



As shown above, fill in the Region information and click Confirm to create the Region.

Note: a Region is simply a set of brokers; group brokers as your business requires and create Regions accordingly.

## 3. Create a Logical Cluster



As shown above, fill in the logical-cluster information and click Confirm to create the logical cluster.