KITAITI Makoto
null+****@clear*****
Sat Jan 30 18:53:54 JST 2016
KITAITI Makoto 2016-01-30 18:53:54 +0900 (Sat, 30 Jan 2016)

  New Revision: 5279df9d52cd1e05abd0327ab52218790bd99fe2
  https://github.com/droonga/droonga.org/commit/5279df9d52cd1e05abd0327ab52218790bd99fe2

  Merged f7d3990: Merge pull request #28 from KitaitiMakoto/no-jq

  Message:
    Remove jq command from sample codes

  Modified files:
    tutorial/1.1.2/add-replica/index.md
    tutorial/1.1.2/benchmark/index.md
    tutorial/1.1.2/dump-restore/index.md
    tutorial/1.1.2/groonga/index.md

  Modified: tutorial/1.1.2/add-replica/index.md (+8 -8)
===================================================================
--- tutorial/1.1.2/add-replica/index.md    2016-01-30 18:45:16 +0900 (4985de1)
+++ tutorial/1.1.2/add-replica/index.md    2016-01-30 18:53:54 +0900 (3077d08)
@@ -95,7 +95,7 @@ Currently, the new node doesn't work as a node of the existing cluster.
 You can confirm that, via the `system.status` command:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -107,7 +107,7 @@ $ curl "http://node0:10041/droonga/system/status" | jq "."
   },
   "reporter": "..."
 }
-$ curl "http://node2:10041/droonga/system/status" | jq "."
+$ curl "http://node2:10041/droonga/system/status"
 {
   "nodes": {
     "node2:10031/droonga": {
@@ -193,7 +193,7 @@ With that, a new replica node has successfully joined to your Droonga cluster.
 You can confirm that they are working as a cluster, via the `system.status` command:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -216,7 +216,7 @@ Equivalence of all replicas can be confirmed with the command `system.statistics
 
 ~~~
 (on node0)
-$ curl "http://node0:10041/droonga/system/statistics/object/count/per-volume?output\[\]=total" | jq "."
+$ curl "http://node0:10041/droonga/system/statistics/object/count/per-volume?output\[\]=total"
 {
   "node0:10031/droonga.000": {
     "total": 540
@@ -268,7 +268,7 @@ Now, the node has been successfully unjoined from the cluster.
 You can confirm that the `node2` is unjoined, via the `system.status` command:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -280,7 +280,7 @@ $ curl "http://node0:10041/droonga/system/status" | jq "."
   },
   "reporter": "..."
 }
-$ curl "http://node2:10041/droonga/system/status" | jq "."
+$ curl "http://node2:10041/droonga/system/status"
 {
   "nodes": {
     "node2:10031/droonga": {
@@ -316,7 +316,7 @@ Now the node has been gone.
 You can confirm that via the `system.status` command:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -365,7 +365,7 @@ Finally a Droonga cluster constructed with two nodes `node0` and `node2` is here
 You can confirm that, via the `system.status` command:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {

  Modified: tutorial/1.1.2/benchmark/index.md (+8 -9)
===================================================================
--- tutorial/1.1.2/benchmark/index.md    2016-01-30 18:45:16 +0900 (113c17c)
+++ tutorial/1.1.2/benchmark/index.md    2016-01-30 18:53:54 +0900 (6812c71)
@@ -262,8 +262,7 @@ Make sure that Droonga's HTTP server is actualy listening the port `10042` and i
 
 ~~~
 (on node0)
-% sudo apt-get install -y jq
-% curl "http://node0:10042/droonga/system/status" | jq .
+% curl "http://node0:10042/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -352,7 +351,7 @@ Assume that you use a computer `node3` as the client:
 (on node3)
 % sudo apt-get update
 % sudo apt-get -y upgrade
-% sudo apt-get install -y ruby curl jq
+% sudo apt-get install -y ruby curl
 % sudo gem install drnbench
 ~~~
 
@@ -368,7 +367,7 @@ First, you have to determine the cache hit rate.
 If you have any existing service based on Groonga, you can get the actual cache hit rate of the Groonga database via `status` command, like:
 
 ~~~
-% curl "http://node0:10041/d/status" | jq .
+% curl "http://node0:10041/d/status"
 [
   [
     0,
@@ -536,7 +535,7 @@ Then you'll get the reference result of the Groonga.
 To confirm the result is valid, check the response of the `status` command:
 
 ~~~
-% curl "http://node0:10041/d/status" | jq .
+% curl "http://node0:10041/d/status"
 [
   [
     0,
@@ -596,7 +595,7 @@ Make sure that only one node is actually detected:
 
 ~~~
 (on node3)
-% curl "http://node0:10042/droonga/system/status" | jq .
+% curl "http://node0:10042/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -631,7 +630,7 @@ It may help you to analyze what is the bottleneck.
 And, to confirm the result is valid, you should check the actual cache hit rate:
 
 ~~~
-% curl "http://node0:10042/statistics/cache" | jq .
+% curl "http://node0:10042/statistics/cache"
 {
   "hitRatio": 49.830717830807124,
   "nHits": 66968,
@@ -660,7 +659,7 @@ Make sure that two nodes are actually detected:
 
 ~~~
 (on node3)
-% curl "http://node0:10042/droonga/system/status" | jq .
+% curl "http://node0:10042/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -721,7 +720,7 @@ Make sure that three nodes are actually detected:
 
 ~~~
 (on node3)
-% curl "http://node0:10042/droonga/system/status" | jq .
+% curl "http://node0:10042/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {

  Modified: tutorial/1.1.2/dump-restore/index.md (+17 -17)
===================================================================
--- tutorial/1.1.2/dump-restore/index.md    2016-01-30 18:45:16 +0900 (9321467)
+++ tutorial/1.1.2/dump-restore/index.md    2016-01-30 18:53:54 +0900 (9af5f3e)
@@ -137,7 +137,7 @@ Make it empty with these commands:
 
 ~~~
 $ endpoint="http://node0:10041"
-$ curl "$endpoint/d/table_remove?name=Location" | jq "."
+$ curl "$endpoint/d/table_remove?name=Location"
 [
   [
     0,
@@ -146,7 +146,7 @@ $ curl "$endpoint/d/table_remove?name=Location" | jq "."
   ],
   true
 ]
-$ curl "$endpoint/d/table_remove?name=Store" | jq "."
+$ curl "$endpoint/d/table_remove?name=Store"
 [
   [
     0,
@@ -155,7 +155,7 @@ $ curl "$endpoint/d/table_remove?name=Store" | jq "."
   ],
   true
 ]
-$ curl "$endpoint/d/table_remove?name=Term" | jq "."
+$ curl "$endpoint/d/table_remove?name=Term"
 [
   [
     0,
@@ -171,7 +171,7 @@ Let's confirm it.
 You'll see empty result by `select` and `table_list` commands, like:
 
 ~~~
-$ curl "$endpoint/d/table_list" | jq "."
+$ curl "$endpoint/d/table_list"
 [
   [
     0,
@@ -215,9 +215,9 @@ $ curl "$endpoint/d/table_list" | jq "."
     ]
   ]
 ]
-$ curl -X DELETE "$endpoint/cache" | jq "."
+$ curl -X DELETE "$endpoint/cache"
 true
-$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10"
 [
   [
     0,
@@ -260,9 +260,9 @@ Note:
 Then the data is completely restored. Confirm it:
 
 ~~~
-$ curl -X DELETE "$endpoint/cache" | jq "."
+$ curl -X DELETE "$endpoint/cache"
 true
-$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10"
 [
   [
     0,
@@ -363,7 +363,7 @@ $ ps aux | grep droonga-engine-service | grep -v grep | wc -l
 Now you'll see two separate clusters like:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -372,7 +372,7 @@ $ curl "http://node0:10041/droonga/system/status" | jq "."
   },
   "reporter": "..."
 }
-$ curl "http://node1:10041/droonga/system/status" | jq "."
+$ curl "http://node1:10041/droonga/system/status"
 {
   "nodes": {
     "node1:10031/droonga": {
@@ -391,9 +391,9 @@ $ endpoint="http://node1:10041"
 $ curl "$endpoint/d/table_remove?name=Location"
 $ curl "$endpoint/d/table_remove?name=Store"
 $ curl "$endpoint/d/table_remove?name=Term"
-$ curl -X DELETE "http://node1:10041/cache" | jq "."
+$ curl -X DELETE "http://node1:10041/cache"
 true
-$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10"
 [
   [
     0,
@@ -409,9 +409,9 @@ $ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" |
     ]
   ]
 ]
-$ curl -X DELETE "http://node0:10041/cache" | jq "."
+$ curl -X DELETE "http://node0:10041/cache"
 true
-$ curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10"
 [
   [
     0,
@@ -508,9 +508,9 @@ Note that you must specify the host name (or the IP address) of the working mach
 After that contents of these two clusters are completely synchronized. Confirm it:
 
 ~~~
-$ curl -X DELETE "http://node1:10041/cache" | jq "."
+$ curl -X DELETE "http://node1:10041/cache"
 true
-$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10"
 [
   [
     0,
@@ -581,7 +581,7 @@ After that there is just one cluster - yes, it's the initial state.
 (Of course you will have to wait for a while until services are completely restarted.)
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {

  Modified: tutorial/1.1.2/groonga/index.md (+19 -19)
===================================================================
--- tutorial/1.1.2/groonga/index.md    2016-01-30 18:45:16 +0900 (4175061)
+++ tutorial/1.1.2/groonga/index.md    2016-01-30 18:53:54 +0900 (7dc843a)
@@ -240,7 +240,7 @@ Let's make sure that the cluster works, by a Droonga command, `system.status`.
 You can see the result via HTTP, like:
 
 ~~~
-$ curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -258,7 +258,7 @@ The result says that two nodes are working correctly.
 Because it is a cluster, another endpoint returns same result.
 
 ~~~
-$ curl "http://node1:10041/droonga/system/status" | jq "."
+$ curl "http://node1:10041/droonga/system/status"
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -325,7 +325,7 @@ To create a new table `Store`, you just have to send a GET request for the `tabl
 
 ~~~
 $ endpoint="http://node0:10041"
-$ curl "$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=ShortText" | jq "."
+$ curl "$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=ShortText"
 [
   [
     0,
@@ -344,7 +344,7 @@ OK, now the table has been created successfully.
 Let's see it by the `table_list` command:
 
 ~~~
-$ curl "$endpoint/d/table_list" | jq "."
+$ curl "$endpoint/d/table_list"
 [
   [
     0,
@@ -403,7 +403,7 @@ $ curl "$endpoint/d/table_list" | jq "."
 Because it is a cluster, another endpoint returns same result.
 
 ~~~
-$ curl "http://node1:10041/d/table_list" | jq "."
+$ curl "http://node1:10041/d/table_list"
 [
   [
     0,
@@ -462,7 +462,7 @@ $ curl "http://node1:10041/d/table_list" | jq "."
 Next, create new columns `name` and `location` to the `Store` table by the `column_create` command, like:
 
 ~~~
-$ curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type=ShortText" | jq "."
+$ curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type=ShortText"
 [
   [
     0,
@@ -471,7 +471,7 @@ $ curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type
   ],
   true
 ]
-$ curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&type=WGS84GeoPoint" | jq "."
+$ curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&type=WGS84GeoPoint"
 [
   [
     0,
@@ -485,7 +485,7 @@ $ curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&
 Create indexes also.
 
 ~~~
-$ curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortText&default_tokenizer=TokenBigram&normalizer=NormalizerAuto" | jq "."
+$ curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortText&default_tokenizer=TokenBigram&normalizer=NormalizerAuto"
 [
   [
     0,
@@ -494,7 +494,7 @@ $ curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortTex
   ],
   true
 ]
-$ curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|WITH_POSITION&type=Store&source=name" | jq "."
+$ curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|WITH_POSITION&type=Store&source=name"
 [
   [
     0,
@@ -503,7 +503,7 @@ $ curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|
   ],
   true
 ]
-$ curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS84GeoPoint" | jq "."
+$ curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS84GeoPoint"
 [
   [
     0,
@@ -512,7 +512,7 @@ $ curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS8
   ],
   true
 ]
-$ curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&type=Store&source=location" | jq "."
+$ curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&type=Store&source=location"
 [
   [
     0,
@@ -526,7 +526,7 @@ $ curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&t
 Let's confirm results:
 
 ~~~
-$ curl "$endpoint/d/table_list" | jq "."
+$ curl "$endpoint/d/table_list"
 [
   [
     0,
@@ -600,7 +600,7 @@ $ curl "$endpoint/d/table_list" | jq "."
     ]
   ]
 ]
-$ curl "$endpoint/d/column_list?table=Store" | jq "."
+$ curl "$endpoint/d/column_list?table=Store"
 [
   [
     0,
@@ -674,7 +674,7 @@ $ curl "$endpoint/d/column_list?table=Store" | jq "."
     ]
   ]
 ]
-$ curl "$endpoint/d/column_list?table=Term" | jq "."
+$ curl "$endpoint/d/column_list?table=Term"
 [
   [
     0,
@@ -740,7 +740,7 @@ $ curl "$endpoint/d/column_list?table=Term" | jq "."
     ]
   ]
 ]
-$ curl "$endpoint/d/column_list?table=Location" | jq "."
+$ curl "$endpoint/d/column_list?table=Location"
 [
   [
     0,
@@ -864,7 +864,7 @@ stores.json:
 Then, send it as a POST request of the `load` command, like:
 
 ~~~
-$ curl --data "@stores.json" "$endpoint/d/load?table=Store" | jq "."
+$ curl --data "@stores.json" "$endpoint/d/load?table=Store"
 [
   [
     0,
@@ -886,7 +886,7 @@ OK, all data is now ready.
 As the starter, let's select initial ten records with the `select` command:
 
 ~~~
-$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10"
 [
   [
     0,
@@ -942,7 +942,7 @@ $ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
 Of course you can specify conditions via the `query` option:
 
 ~~~
-$ curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_columns=name&limit=10"
 [
   [
     0,
@@ -969,7 +969,7 @@ $ curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_
     ]
   ]
 ]
-$ curl "$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&limit=10"
 [
   [
     0,