[Groonga-commit] groonga/groonga at 4e7108f [master] doc: remove deprecated description


Kouhei Sutou null+****@clear*****
Thu Aug 13 23:05:53 JST 2015


Kouhei Sutou	2015-08-13 23:05:53 +0900 (Thu, 13 Aug 2015)

  New Revision: 4e7108f8febc62e60b20fa1e0c8e04e5be578ea8
  https://github.com/groonga/groonga/commit/4e7108f8febc62e60b20fa1e0c8e04e5be578ea8

  Message:
    doc: remove deprecated description

  Removed files:
    doc/source/example/reference/tokenizers/token-regexp-get-beginning-of-text.log
    doc/source/example/reference/tokenizers/token-regexp-get-end-of-text.log
  Modified files:
    doc/source/reference/tokenizers.rst

  Deleted: doc/source/example/reference/tokenizers/token-regexp-get-beginning-of-text.log (+0 -60) 100644
===================================================================
--- doc/source/example/reference/tokenizers/token-regexp-get-beginning-of-text.log    2015-08-13 22:57:45 +0900 (8e230f6)
+++ /dev/null
@@ -1,60 +0,0 @@
-Execution example::
-
-  tokenize TokenRegexp "\\A/home/alice/" NormalizerAuto --mode GET
-  # [
-  #   [
-  #     0, 
-  #     1337566253.89858, 
-  #     0.000355720520019531
-  #   ], 
-  #   [
-  #     {
-  #       "position": 0, 
-  #       "value": "￯"
-  #     }, 
-  #     {
-  #       "position": 1, 
-  #       "value": "/h"
-  #     }, 
-  #     {
-  #       "position": 2, 
-  #       "value": "ho"
-  #     }, 
-  #     {
-  #       "position": 3, 
-  #       "value": "om"
-  #     }, 
-  #     {
-  #       "position": 4, 
-  #       "value": "me"
-  #     }, 
-  #     {
-  #       "position": 5, 
-  #       "value": "e/"
-  #     }, 
-  #     {
-  #       "position": 6, 
-  #       "value": "/a"
-  #     }, 
-  #     {
-  #       "position": 7, 
-  #       "value": "al"
-  #     }, 
-  #     {
-  #       "position": 8, 
-  #       "value": "li"
-  #     }, 
-  #     {
-  #       "position": 9, 
-  #       "value": "ic"
-  #     }, 
-  #     {
-  #       "position": 10, 
-  #       "value": "ce"
-  #     }, 
-  #     {
-  #       "position": 11, 
-  #       "value": "e/"
-  #     }
-  #   ]
-  # ]

  Deleted: doc/source/example/reference/tokenizers/token-regexp-get-end-of-text.log (+0 -32) 100644
===================================================================
--- doc/source/example/reference/tokenizers/token-regexp-get-end-of-text.log    2015-08-13 22:57:45 +0900 (6981b6f)
+++ /dev/null
@@ -1,32 +0,0 @@
-Execution example::
-
-  tokenize TokenRegexp "\\.txt\\z" NormalizerAuto --mode GET
-  # [
-  #   [
-  #     0, 
-  #     1337566253.89858, 
-  #     0.000355720520019531
-  #   ], 
-  #   [
-  #     {
-  #       "position": 0, 
-  #       "value": "\\."
-  #     }, 
-  #     {
-  #       "position": 1, 
-  #       "value": ".t"
-  #     }, 
-  #     {
-  #       "position": 2, 
-  #       "value": "tx"
-  #     }, 
-  #     {
-  #       "position": 3, 
-  #       "value": "xt"
-  #     }, 
-  #     {
-  #       "position": 5, 
-  #       "value": "￰"
-  #     }
-  #   ]
-  # ]

  Modified: doc/source/reference/tokenizers.rst (+0 -21)
===================================================================
--- doc/source/reference/tokenizers.rst    2015-08-13 22:57:45 +0900 (e8d9595)
+++ doc/source/reference/tokenizers.rst    2015-08-13 23:05:53 +0900 (cf99dec)
@@ -515,24 +515,3 @@ index text:
 .. groonga-command
 .. include:: ../example/reference/tokenizers/token-regexp-add.log
 .. tokenize TokenRegexp "/home/alice/test.txt" NormalizerAuto --mode ADD
-
-The beginning of text mark is used for the beginning of text search by
-``\A``. If you use ``TokenRegexp`` for tokenizing query,
-``TokenRegexp`` adds the beginning of text mark (``U+FFEF``) as the
-first token. The beginning of text mark must be appeared at the first,
-you can get results of the beginning of text search.
-
-.. groonga-command
-.. include:: ../example/reference/tokenizers/token-regexp-get-beginning-of-text.log
-.. tokenize TokenRegexp "\\A/home/alice/" NormalizerAuto --mode GET
-
-The end of text mark is used for the end of text search by ``\z``.
-If you use ``TokenRegexp`` for tokenizing query, ``TokenRegexp`` adds
-the end of text mark (``U+FFF0``) as the last token. The end of text
-mark must be appeared at the end, you can get results of the end of
-text search.
-
-.. groonga-command
-.. include:: ../example/reference/tokenizers/token-regexp-get-end-of-text.log
-.. tokenize TokenRegexp "\\.txt\\z" NormalizerAuto --mode GET
-


