
User Guide


README in Chinese

Introduction

The Alicloud Log Service provides a web console and SDKs to operate the service and analyze logs. To make automation more convenient, we release this command line interface (CLI).

Brief

The Alicloud Log Service command line console supports almost all of the operations available in the web console. It also checks for incomplete log query results and queries across multiple pages. It can even copy project settings across regions.

Major Features

  • Supports almost all of the 50+ REST APIs of the log service.
  • Multiple account support for cross-region operation.
  • Checks for incomplete log query results and automatically queries across pages.
  • Multiple credential storage options: file, command line, or environment variables.
  • Supports command line based or file based inputs, with complete format validation.
  • Supports JMES filters for further processing of results, e.g. selecting specific fields from JSON.
  • Cross-platform support (Windows, Linux and Mac), Python based and friendly to Py2, Py3 and even PyPy. Supports pip installation.

Installation

Operating System

The CLI supports the following operating systems:

  • Windows
  • Mac
  • Linux

Supported Version

Python 2.6, 2.7, 3.3, 3.4, 3.5, 3.6, PyPy, PyPy3

Installation Method

Run the command below to install the CLI:

> pip install -U aliyun-log-cli

Note

On Mac, it’s recommended to use pip3 to install the CLI.

> brew install python3
> pip3 install -U aliyun-log-cli

If you encounter errors like OSError: [Errno 1] Operation not permitted, try installing with the --user option:

> pip3 install -U aliyun-log-cli --user

Alicloud ECS with limited internet access

You could try a local network provider's mirror. For Alicloud ECS, you can try the one below:

pip/pip3 install -U aliyun-log-cli --index http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

Offline Installation

Since 0.1.12, we provide offline installation packages for the Mac x64 and Linux x64 platforms.

Follow the steps below to install it:

  1. Download the package from the release page.
  2. Unzip it to a local folder, e.g. cli_packages; you will see some .whl files inside it.
  3. If you don’t have pip, install pip first:

python pip-10.0.1-py2.py3-none-any.whl/pip install --no-index cli_packages/pip-10.0.1-py2.py3-none-any.whl

  4. Install the CLI:

pip install aliyun-log-cli --no-index --find-links=cli_packages

  5. Verify it:

    > aliyunlog --version
    

FAQ of Installation

  1. Encountering the error TLSV1_ALERT_PROTOCOL_VERSION when installing the CLI:
> pip install aliyun-log-cli

Collecting aliyun-log-cli
  Could not fetch URL https://pypi.python.org/simple/aliyun-log-cli/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) - skipping
  Could not find a version that satisfies the requirement aliyun-log-cli (from versions: )
No matching distribution found for aliyun-log-cli

Solution: Please upgrade pip and retry:

pip install pip -U

  2. On Linux/Mac, cannot find the command aliyunlog?

This is caused by a missing shell wrapper script for aliyunlog; you can create one yourself.

2.1. Find the python path:

For Linux or Mac:

which python

2.2. Create a shell script named aliyunlog with the content below, make it executable, and put it into a folder on your PATH:

#!<python path here with ! ahead>
import re
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(
        load_entry_point('aliyun-log-cli', 'console_scripts', 'aliyunlog')()
    )

For Linux or Mac, it could be put under /usr/bin/.

2.3. Verify it:

> aliyunlog --version
  3. Failed to install the module regex?

Refer to the link below to install python-devel via yum, apt-get or manually: https://rpmfind.net/linux/rpm2html/search.php?query=python-devel
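
For example, a minimal sketch on common distributions (exact package names may vary by distribution and Python version):

# RPM-based systems (CentOS, RHEL):
sudo yum install python-devel

# Debian/Ubuntu:
sudo apt-get install python-dev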

Full Usage list

Run the command below to get the full usage list:

> aliyunlog --help

It will show the full usage.

Note: the command aliyun is deprecated to prevent conflicts with the universal Alicloud CLI; aliyunlog is recommended instead.

Configure CLI

Refer to Configure CLI.
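
For example, a basic configuration call looks like the following (the access key values are placeholders; cn-hangzhou.log.aliyuncs.com is just a sample endpoint):

> aliyunlog configure <your_access_id> <your_access_key> cn-hangzhou.log.aliyuncs.com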

Input and Output

Inputs

  1. Normal case:
> aliyunlog log get_logs --request="{\"topic\": \"\", \"logstore\": \"logstore1\", \"project\": \"dlq-test-cli-123\", \"toTime\": \"2018-01-01 10:10:10\", \"offset\": \"0\", \"query\": \"*\", \"line\": \"10\", \"fromTime\": \"2018-01-01 08:08:08\", \"reverse\":\"false\"}"
  2. Input via file: You could store the content of a parameter in a file and pass it via the command line with the prefix file://:
> aliyunlog log get_logs --request="file://./get_logs.json"

The content of the file get_logs.json is shown below. Note: the backslashes used to escape the quotes on the command line are unnecessary in the file.

{
  "topic": "",
  "logstore": "logstore1",
  "project": "project1",
  "toTime": "2018-01-01 11:11:11",
  "offset": "0",
  "query": "*",
  "line": "10",
  "fromTime": "2018-01-01 10:10:10",
  "reverse": "true"
}

Parameter Validation

  • Mandatory check: if a mandatory parameter is missing, an error with usage info is reported.
  • The format of each parameter's value is validated, e.g. int, bool, string list, special data structures.
  • For booleans, the following values are supported:
  • true (case insensitive), T, 1
  • false (case insensitive), F, 0
  • String lists are supported as ["s1", "s2"]
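
For illustration, values like the following would pass validation (the option names --enabled and --keys are hypothetical placeholders, not real CLI parameters):

--enabled=true          # boolean: true/T/1 accepted (case insensitive)
--enabled=F             # boolean: false/F/0 accepted (case insensitive)
--keys='["s1", "s2"]'   # string list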

Output

  1. For operations like Create, Update and Delete, there's no output; an exit code of 0 means success.
  2. For operations like Get and List, the output is in JSON format.
  3. Errors are reported in JSON format as below:
{
  "errorCode":"...",
  "errorMessage":"..."
}

Filter output

The output can be filtered via a JMES filter:

Examples:

> aliyunlog log get_logs ...

which outputs:

[ {"__source__": "ip1", "key": "log1"}, {"__source__": "ip2", "key": "log2"} ]

You could use the --jmes-filter below to break the logs into separate lines:

> aliyunlog log get_logs ... --jmes-filter="join('\n', map(&to_string(@), @))"

output:

{"__source__": "ip1", "key": "log1"}
{"__source__": "ip2", "key": "log2"}

Further Process

You could use >> to store the output to a file, or process the output with your own command. For example, another way to break the logs into separate lines is to append a | and a command on Linux/Unix:

| python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));"
or
| python3 -c "import json;list(map(lambda x: print(json.dumps(x)), json.loads(input())));"

e.g.

aliyunlog log get_log .... | python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));" >> data.txt

Command Reference

Command Specification

1. aliyunlog log <subcommand> [parameters | global options]
2. aliyunlog configure <access_id> <access-key> <endpoint>
3. aliyunlog [--help | --version]

Alias

There’s also an alias aliyunlog for the CLI in case the command aliyun conflicts with others.

1. aliyunlog log <subcommand> [parameters | global options]
2. aliyunlog configure <access_id> <access-key> <endpoint>
3. aliyunlog [--help | --version]

Subcommand and parameters

The CLI leverages aliyun-log-python-sdk and maps each subcommand to a method of aliyun.log.LogClient. The command line parameters are mapped to the parameters of those methods. For the detailed parameter spec, please refer to the Mapped Python SDK API Spec.

Examples:

def create_logstore(self, project_name, logstore_name, ttl=2, shard_count=30):

Mapped to CLI:

> aliyunlog log create_logstore
  --project_name=<value>
  --logstore_name=<value>
  [--ttl=<value>]
  [--shard_count=<value>]
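
For example, a possible invocation with placeholder values:

> aliyunlog log create_logstore --project_name="project1" --logstore_name="logstore1" --ttl=30 --shard_count=2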

Global options

All commands support the optional global options below:

[--access-id=<value>]
[--access-key=<value>]
[--region-endpoint=<value>]
[--client-name=<value>]
[--jmes-filter=<value>]
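
For example, to run a subcommand against another account configured via aliyunlog configure (the project and account names here are placeholders):

> aliyunlog log list_logstore --project_name="project1" --client-name="account2"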

Command categories

  1. Project management
  2. Logstore management
  3. Shard management
  4. Machine group management
  5. Logtail config management
  6. Machine group and Logtail Config Mapping
  7. Index management
  8. Cursor management
  9. Logs write and consume
  10. Shipper management
  11. Consumer group management
  12. Elasticsearch data migration

  1. Project management

  • list_project
  • create_project
  • get_project
  • delete_project
  • copy_project
  • Copy all configurations including logstore, logtail, and index config from one project to another project, which could be in a different region.
> aliyunlog log copy_project --from_project="p1" --to_project="p1" --to_client="account2"
  • Note: to_client is another account configured via aliyunlog configure; it's OK to pass main or omit it to copy within the same region.
  • Refer to Copy project settings cross regions to learn more.

  2. Logstore management

  • create_logstore
  • delete_logstore
  • get_logstore
  • update_logstore
  • list_logstore

  3. Shard management

  • list_shards
  • split_shard
  • merge_shard
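
For example, a minimal invocation to list the shards of a logstore (assuming list_shards maps to the SDK's list_shards(project_name, logstore_name); names are placeholders):

> aliyunlog log list_shards --project_name="project1" --logstore_name="logstore1"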

  4. Machine group management

  • create_machine_group
  • Format of partial parameter (see the example after this list):
{
 "machine_list": [
   "machine1",
   "machine2"
 ],
 "machine_type": "userdefined",
 "group_name": "group_name2",
 "group_type": "Armory",
 "group_attribute": {
   "externalName": "ex name",
   "groupTopic": "topic x"
 }
}
  • delete_machine_group
  • update_machine_group
  • get_machine_group
  • list_machine_group
  • list_machines
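
As referenced above, a sketch of creating a machine group by passing its detail JSON from a file, following the file:// convention shown earlier (the parameter name group_detail is an assumption based on the SDK method; names are placeholders):

> aliyunlog log create_machine_group --project_name="project1" --group_detail="file://./machine_group.json"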

  5. Logtail config management

  • create_logtail_config
  • Refer to Create Logtail Configuration to learn how to create Logtail configurations in various formats.
  • update_logtail_config
  • delete_logtail_config
  • get_logtail_config
  • list_logtail_config

  6. Machine group and Logtail Config Mapping

  • apply_config_to_machine_group
  • remove_config_to_machine_group
  • get_machine_group_applied_configs
  • get_config_applied_machine_groups

  7. Index management

  • create_index
  • Format of partial parameter (see the example after this list):
{
 "keys": {
   "f1": {
     "caseSensitive": false,
     "token": [
       ",",
       " ",
       "\"",
       "\"",
       ";",
       "=",
       "(",
       ")",
       "[",
       "]",
       "{",
       "}",
       "?",
       "@",
       "&",
       "<",
       ">",
       "/",
       ":",
       "\n",
       "\t"
     ],
     "type": "text",
     "doc_value": true
   },
   "f2": {
     "doc_value": true,
     "type": "long"
   }
 },
 "storage": "pg",
 "ttl": 2,
 "index_mode": "v2",
 "line": {
   "caseSensitive": false,
   "token": [
     ",",
     " ",
     "\"",
     "\"",
     ";",
     "=",
     "(",
     ")",
     "[",
     "]",
     "{",
     "}",
     "?",
     "@",
     "&",
     "<",
     ">",
     "/",
     ":",
     "\n",
     "\t"
   ]
 }
}
  • update_index
  • delete_index
  • get_index_config
  • list_topics
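
As referenced above, a sketch of creating an index by passing the configuration JSON from a file (the parameter name index_detail is an assumption based on the SDK method; names are placeholders):

> aliyunlog log create_index --project_name="project1" --logstore_name="logstore1" --index_detail="file://./index_config.json"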

  8. Cursor management

  • get_cursor
  • get_cursor_time
  • get_previous_cursor_time
  • get_begin_cursor
  • get_end_cursor

  9. Logs write and consume

  • put_logs
  • Format of parameter (see the example after this list):
{
"project": "dlq-test-cli-35144",
"logstore": "logstore1",
"topic": "topic1",
"source": "source1",
"logtags": [
  [
    "tag1",
    "v1"
  ],
  [
    "tag2",
    "v2"
  ]
],
"hashKey": "1231231234",
"logitems": [
  {
    "timestamp": 1510579341,
    "contents": [
      [
        "key1",
        "v1"
      ],
      [
        "key2",
        "v2"
      ]
    ]
  },
  {
    "timestamp": 1510579341,
    "contents": [
      [
        "key3",
        "v3"
      ],
      [
        "key4",
        "v4"
      ]
    ]
  }
]
}
  • get_logs
  • Format of parameter:
{
"topic": "",
"logstore": "logstore1",
"project": "dlq-test-cli-35144",
"toTime": "2018-01-01 11:11:11",
"offset": "0",
"query": "*",
"line": "10",
"fromTime": "2018-01-01 10:10:10",
"reverse": "true"
}
  • It will fetch all data when line is passed as -1. But if the data volume is large, e.g. exceeding 1GB, it's better to use get_log_all.
  • get_log_all
  • This API is similar to get_logs, but it fetches data iteratively and outputs it in chunks. It's intended for fetching large volumes of data.
  • get_histograms
  • pull_logs
  • pull_log
  • This API is similar to pull_logs, but it accepts human-readable parameters, fetches data iteratively and outputs it in chunks. It's intended for fetching large volumes of data.
  • pull_log_dump
  • This API dumps data from all shards to local files concurrently.
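
As referenced above, a sketch of writing logs by passing the put_logs parameter JSON from a file, following the file:// convention shown earlier (assuming the parameter is named request, as with get_logs; the file name is a placeholder):

> aliyunlog log put_logs --request="file://./put_logs.json"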

  10. Shipper management

  • create_shipper
  • Format of partial parameter:
{
"oss_bucket": "dlq-oss-test1",
"oss_prefix": "sls",
"oss_role_arn": "acs:ram::1234:role/aliyunlogdefaultrole",
"buffer_interval": 300,
"buffer_mb": 128,
"compress_type": "snappy"
}
  • update_shipper
  • delete_shipper
  • get_shipper_config
  • list_shipper
  • get_shipper_tasks
  • retry_shipper_tasks

  11. Consumer group management

  • create_consumer_group
  • update_consumer_group
  • delete_consumer_group
  • list_consumer_group
  • update_check_point
  • get_check_point

  12. Elasticsearch data migration

Troubleshooting

By default, the CLI stores errors and warnings at ~/aliyunlogcli.log. The logging level and location can be adjusted via the section __logging__ of the file ~/.aliyunlogcli:

[__logging__]
filename=  # default: ~/aliyunlogcli.log, rotated when reaching filebytes
filebytes=   # default: 104857600 (100MB), size of each log file before rotation, unit: bytes
backupcount= # default: 5, number of backup files to keep
#filemode=  # deprecated
format=    # default: %(asctime)s %(levelname)s %(filename)s:%(lineno)d %(funcName)s %(message)s
datefmt=   # default: "%Y-%m-%d %H:%M:%S", could be a strftime() compatible date/time format string
level=     # default: warn, could be: info, error, fatal, critical, debug
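
For example, an illustrative configuration that keeps three 10MB rotated files and logs at info level to a custom location:

[__logging__]
filename=/var/log/aliyunlogcli.log
filebytes=10485760
backupcount=3
level=info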

Other resources

  1. Alicloud Log Service homepage: https://www.alibabacloud.com/product/log-service
  2. Alicloud Log Service doc: https://www.alibabacloud.com/help/product/28958.htm
  3. Alicloud Log Python SDK doc: http://aliyun-log-python-sdk.readthedocs.io/
  4. For any issues, please submit support tickets.