Mapped SDK API¶
Request and Config Classes¶
GetHistogramsRequest ([project, logstore, …]) |
The request used to get histograms of a query from log service. |
GetLogsRequest ([project, logstore, …]) |
The request used to get logs by a query from log service. |
GetProjectLogsRequest ([project, query]) |
The request used to get logs by a query from log service across multiple logstores. |
ListTopicsRequest ([project, logstore, …]) |
The request used to get topics of a query from log service. |
ListLogstoresRequest ([project]) |
The request used to list log stores from log service. |
PutLogsRequest ([project, logstore, topic, …]) |
The request used to send data to log service. |
LogtailConfigGenerator |
Generator of Logtail config
PluginConfigDetail (logstoreName, configName, …) |
The logtail config for plugin mode |
SeperatorFileConfigDetail (logstoreName, …) |
The logtail config for separator mode |
SimpleFileConfigDetail (logstoreName, …[, …]) |
The logtail config for simple mode |
FullRegFileConfigDetail (logstoreName, …[, …]) |
The logtail config for full regex mode |
JsonFileConfigDetail (logstoreName, …[, …]) |
The logtail config for json mode |
ApsaraFileConfigDetail (logstoreName, …[, …]) |
The logtail config for Apsara mode |
SyslogConfigDetail (logstoreName, configName, tag) |
The logtail config for syslog mode |
MachineGroupDetail ([group_name, …]) |
The machine group detail info |
IndexConfig ([ttl, line_config, …]) |
The index config of a logstore |
OssShipperConfig (oss_bucket, oss_prefix, …) |
An OSS shipper config |
OdpsShipperConfig (odps_endpoint, …[, …]) |
An ODPS shipper config |
ShipperTask (task_id, task_status, …) |
A shipper task |
Project¶
list_project ([offset, size]) |
List the projects. Unsuccessful operation will cause a LogException. |
create_project (project_name, project_des) |
Create a project. Unsuccessful operation will cause a LogException. |
get_project (project_name) |
Get a project. Unsuccessful operation will cause a LogException. |
delete_project (project_name) |
Delete a project. Unsuccessful operation will cause a LogException. |
copy_project (from_project, to_project[, …]) |
Copy a project (logstores, machine groups and logtail configs) to a target project; the target project is expected not to contain logstores with the same names as the source project. |
Logstore¶
copy_logstore (from_project, from_logstore, …) |
Copy a logstore (index and logtail configs) to a target logstore; machine groups are not included yet. |
list_logstore (project_name[, …]) |
List the logstores in a project. Unsuccessful operation will cause a LogException. |
create_logstore (project_name, logstore_name) |
Create a logstore. Unsuccessful operation will cause a LogException. |
get_logstore (project_name, logstore_name) |
Get the logstore meta info. Unsuccessful operation will cause a LogException. |
update_logstore (project_name, logstore_name) |
Update the logstore meta info. Unsuccessful operation will cause a LogException. |
delete_logstore (project_name, logstore_name) |
Delete a logstore. Unsuccessful operation will cause a LogException. |
list_topics (request) |
List all topics in a logstore. |
Index¶
create_index (project_name, logstore_name, …) |
Create an index for a logstore. Unsuccessful operation will cause a LogException. |
update_index (project_name, logstore_name, …) |
Update the index of a logstore. Unsuccessful operation will cause a LogException. |
delete_index (project_name, logstore_name) |
Delete the index of a logstore. Unsuccessful operation will cause a LogException. |
get_index_config (project_name, logstore_name) |
Get the index config detail of a logstore. Unsuccessful operation will cause a LogException. |
Logtail Config¶
create_logtail_config (project_name, …) |
Create a logtail config in a project. Unsuccessful operation will cause a LogException. |
update_logtail_config (project_name, …) |
Update a logtail config in a project. Unsuccessful operation will cause a LogException. |
delete_logtail_config (project_name, config_name) |
Delete a logtail config in a project. Unsuccessful operation will cause a LogException. |
get_logtail_config (project_name, config_name) |
Get a logtail config in a project. Unsuccessful operation will cause a LogException. |
list_logtail_config (project_name[, offset, size]) |
List logtail config names in a project. Unsuccessful operation will cause a LogException. |
Machine Group¶
create_machine_group (project_name, group_detail) |
Create a machine group in a project. Unsuccessful operation will cause a LogException. |
delete_machine_group (project_name, group_name) |
Delete a machine group in a project. Unsuccessful operation will cause a LogException. |
update_machine_group (project_name, group_detail) |
Update a machine group in a project. Unsuccessful operation will cause a LogException. |
get_machine_group (project_name, group_name) |
Get a machine group in a project. Unsuccessful operation will cause a LogException. |
list_machine_group (project_name[, offset, size]) |
List machine group names in a project. Unsuccessful operation will cause a LogException. |
list_machines (project_name, group_name[, …]) |
List machines in a machine group. Unsuccessful operation will cause a LogException. |
Apply Logtail Config¶
apply_config_to_machine_group (project_name, …) |
Apply a logtail config to a machine group. Unsuccessful operation will cause a LogException. |
remove_config_to_machine_group (project_name, …) |
Remove a logtail config from a machine group. Unsuccessful operation will cause a LogException. |
get_machine_group_applied_configs (…) |
Get the logtail config names applied to a machine group. Unsuccessful operation will cause a LogException. |
get_config_applied_machine_groups (…) |
Get the machine group names a logtail config applies to. Unsuccessful operation will cause a LogException. |
Shard¶
list_shards (project_name, logstore_name) |
List the shard meta of a logstore. Unsuccessful operation will cause a LogException. |
split_shard (project_name, logstore_name, …) |
Split a readwrite shard into two shards. Unsuccessful operation will cause a LogException. |
merge_shard (project_name, logstore_name, shardId) |
Merge two adjacent readwrite shards into one shard. Unsuccessful operation will cause a LogException. |
Cursor¶
get_cursor (project_name, logstore_name, …) |
Get a cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException. |
get_cursor_time (project_name, logstore_name, …) |
Get cursor time from log service. Unsuccessful operation will cause a LogException. |
get_previous_cursor_time (project_name, …) |
Get previous cursor time from log service. |
get_begin_cursor (project_name, …) |
Get the begin cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException. |
get_end_cursor (project_name, logstore_name, …) |
Get the end cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException. |
Logs¶
put_logs (request) |
Put logs to log service. |
pull_logs (project_name, logstore_name, …) |
Batch pull log data from log service. Unsuccessful operation will cause a LogException. |
pull_log (project_name, logstore_name, …[, …]) |
Batch pull log data from log service using a time range. Unsuccessful operation will cause a LogException. |
pull_log_dump (project_name, logstore_name, …) |
Dump all logs, one log per line, into files at file_path; the time parameters are the log received time on the server side. |
get_log (project, logstore, from_time, to_time) |
Get logs from log service. |
get_logs (request) |
Get logs from log service. |
get_log_all (project, logstore, from_time, …) |
Get logs from log service. |
get_histograms (request) |
Get histograms of requested query from log service. |
get_project_logs (request) |
Get logs from log service. |
Consumer group¶
create_consumer_group (project, logstore, …) |
create consumer group |
update_consumer_group (project, logstore, …) |
Update consumer group |
delete_consumer_group (project, logstore, …) |
Delete consumer group |
list_consumer_group (project, logstore) |
List consumer group |
update_check_point (project, logstore, …[, …]) |
Update check point |
get_check_point (project, logstore, …[, shard]) |
Get check point |
Dashboard¶
create_dashboard (project, detail) |
Create Dashboard. |
update_dashboard (project, detail) |
Update Dashboard. |
delete_dashboard (project, entity) |
Delete Dashboard. |
get_dashboard (project, entity) |
Get Dashboard. |
list_dashboard (project[, offset, size]) |
List the dashboards, first 100 items by default. Unsuccessful operation will cause a LogException. |
Alert¶
create_alert (project, detail) |
Create Alert. |
update_alert (project, detail) |
Update Alert. |
delete_alert (project, entity) |
Delete Alert. |
get_alert (project, entity) |
Get Alert. |
list_alert (project[, offset, size]) |
List the alerts, first 100 items by default. Unsuccessful operation will cause a LogException. |
Savedsearch¶
create_savedsearch (project, detail) |
Create Savedsearch. |
update_savedsearch (project, detail) |
Update Savedsearch. |
delete_savedsearch (project, entity) |
Delete Savedsearch. |
get_savedsearch (project, entity) |
Get Savedsearch. |
list_savedsearch (project[, offset, size]) |
List the savedsearches, first 100 items by default. Unsuccessful operation will cause a LogException. |
Shipper¶
create_shipper (project_name, logstore_name, …) |
Create an ODPS/OSS shipper; for each type, only one shipper is allowed. Unsuccessful operation will cause a LogException. |
update_shipper (project_name, logstore_name, …) |
Update an ODPS/OSS shipper; for each type, only one shipper is allowed. Unsuccessful operation will cause a LogException. |
delete_shipper (project_name, logstore_name, …) |
Delete an ODPS/OSS shipper. Unsuccessful operation will cause a LogException. |
get_shipper_config (project_name, …) |
Get an ODPS/OSS shipper config. Unsuccessful operation will cause a LogException. |
list_shipper (project_name, logstore_name) |
List ODPS/OSS shippers. Unsuccessful operation will cause a LogException. |
get_shipper_tasks (project_name, …[, …]) |
Get ODPS/OSS shipper tasks in a given time range. Unsuccessful operation will cause a LogException. |
retry_shipper_tasks (project_name, …) |
Retry failed tasks; only failed tasks can be retried. Unsuccessful operation will cause a LogException. |
ES Migration¶
MigrationManager ([hosts, indexes, query, …]) |
MigrationManager migrates data from Elasticsearch to Aliyun Log Service
Definitions¶
class aliyun.log.LogClient(endpoint, accessKeyId, accessKey, securityToken=None, source=None)[source]¶ Construct the LogClient with endpoint, accessKeyId and accessKey.
Parameters: - endpoint (string) – log service host name, for example, cn-hangzhou.log.aliyuncs.com or https://cn-beijing.log.aliyuncs.com
- accessKeyId (string) – aliyun accessKeyId
- accessKey (string) – aliyun accessKey
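As a usage sketch (it assumes the aliyun-log-python-sdk package is installed; the helper name make_client is ours, not part of the SDK), constructing the client looks like this:

```python
def make_client(endpoint, access_key_id, access_key):
    """Build a LogClient from an endpoint and credentials.

    The import is deferred so this sketch can be read and imported
    even when the aliyun-log-python-sdk package is not installed.
    """
    from aliyun.log import LogClient
    return LogClient(endpoint, access_key_id, access_key)

# Example (hypothetical credentials):
# client = make_client("cn-hangzhou.log.aliyuncs.com", "your-key-id", "your-key")
```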
apply_config_to_machine_group(project_name, config_name, group_name)[source]¶ Apply a logtail config to a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name to apply
- group_name (string) – the machine group name
Returns: ApplyConfigToMachineGroupResponse
Raise: LogException
arrange_shard(project, logstore, count)[source]¶ Arrange the readwrite shards of a logstore to the expected count; the count should be larger than the current one.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- count (int) – expected read-write shard count. should be larger than the current one.
Returns: ’‘
Raise: LogException
copy_data(project, logstore, from_time, to_time=None, to_client=None, to_project=None, to_logstore=None, shard_list=None, batch_size=500, compress=True, new_topic=None, new_source=None)[source]¶ Copy data from one logstore to another (possibly the same one, or one in a different region); the time is the log received time on the server side.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- from_time (string/int) – cursor value; could be “begin”, a timestamp, a readable time like “%Y-%m-%d %H:%M:%S CST” (e.g. “2018-01-02 12:12:10 CST”), or a human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”; refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – cursor value, default is “end”; could be “begin”, a timestamp, a readable time like “%Y-%m-%d %H:%M:%S CST” (e.g. “2018-01-02 12:12:10 CST”), or a human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”; refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_client (LogClient) – logclient instance, if empty will use source client
- to_project (string) – project name, if empty will use source project
- to_logstore (string) – logstore name, if empty will use source logstore
- shard_list (string) – shard number list; could be a comma separated list or ranges: 1,20,31-40
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 500
- compress (bool) – if use compression, by default it’s True
- new_topic (string) – overwrite the copied topic with the passed one
- new_source (string) – overwrite the copied source with the passed one
Returns: LogResponse {“total_count”: 30, “shards”: {0: 10, 1: 20}}
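The shard_list parameter accepts both comma separated values and ranges, e.g. 1,20,31-40. A minimal sketch of how such a spec expands into concrete shard ids (the helper below is ours for illustration, not part of the SDK):

```python
def expand_shard_list(spec):
    """Expand a shard spec like "1,20,31-40" into a sorted list of shard ids."""
    shards = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-", 1)
            shards.update(range(int(lo), int(hi) + 1))  # inclusive range
        elif part:
            shards.add(int(part))
    return sorted(shards)

print(expand_shard_list("1,20,31-40"))
# [1, 20, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40]
```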
copy_logstore(from_project, from_logstore, to_logstore, to_project=None, to_client=None)[source]¶ Copy a logstore (index and logtail configs) to a target logstore; machine groups are not included yet. The target logstore will be created if it does not exist.
Parameters: - from_project (string) – project name
- from_logstore (string) – logstore name
- to_logstore (string) – target logstore name
- to_project (string) – target project name, copy to same project if not being specified, will try to create it if not being specified
- to_client (LogClient) – logclient instance, use it to operate on the “to_project” if being specified for cross region purpose
Returns:
copy_project(from_project, to_project, to_client=None, copy_machine_group=False)[source]¶ Copy a project (logstores, machine groups and logtail configs) to a target project; the target project is expected not to contain logstores with the same names as the source project.
Parameters: - from_project (string) – project name
- to_project (string) – project name
- to_client (LogClient) – logclient instance
- copy_machine_group (bool) – if copy machine group resources, False by default.
Returns: None
create_alert(project, detail)¶ Create Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
create_consumer_group(project, logstore, consumer_group, timeout, in_order=False)[source]¶ Create a consumer group.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- timeout (int) – time-out
- in_order (bool) – if consume in order, default is False
Returns: CreateConsumerGroupResponse
create_dashboard(project, detail)¶ Create Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
create_external_store(project_name, config)[source]¶ Create an external store. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config (ExternalStoreConfig) – the external store config
Returns: CreateExternalStoreResponse
Raise: LogException
create_index(project_name, logstore_name, index_detail)[source]¶ Create an index for a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- index_detail (IndexConfig) – the index config detail used to create index
Returns: CreateIndexResponse
Raise: LogException
create_logstore(project_name, logstore_name, ttl=30, shard_count=2, enable_tracking=False, append_meta=False, auto_split=True, max_split_shard=64, preserve_storage=False)[source]¶ Create a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- ttl (int) – the life cycle of log in the logstore in days, default 30, up to 3650
- shard_count (int) – the shard count of the logstore to create, default 2
- enable_tracking (bool) – enable web tracking, default is False
- append_meta (bool) – allow appending meta info (server received time and client external IP) to each received log
- auto_split (bool) – automatically split shards; default is True (max_split_shard defaults to 64)
- max_split_shard (int) – max shard to split, up to 64
- preserve_storage (bool) – if always persist data, TTL will be ignored.
Returns: CreateLogStoreResponse
Raise: LogException
create_logtail_config(project_name, config_detail)[source]¶ Create a logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_detail (LogtailConfigGenerator or SeperatorFileConfigDetail or SimpleFileConfigDetail or FullRegFileConfigDetail or JsonFileConfigDetail or ApsaraFileConfigDetail or SyslogConfigDetail or CommonRegLogConfigDetail) – the logtail config detail info; use LogtailConfigGenerator.from_json to generate the config. Note: CommonRegLogConfigDetail is deprecated.
Returns: CreateLogtailConfigResponse
Raise: LogException
create_machine_group(project_name, group_detail)[source]¶ Create a machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_detail (MachineGroupDetail) – the machine group detail config
Returns: CreateMachineGroupResponse
Raise: LogException
create_project(project_name, project_des)[source]¶ Create a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- project_des (string) – the description of a project
Returns: CreateProjectResponse
Raise: LogException
create_savedsearch(project, detail)¶ Create Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
create_shipper(project_name, logstore_name, shipper_name, shipper_type, shipper_config)[source]¶ Create an ODPS/OSS shipper; for each type, only one shipper is allowed. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
- shipper_type (string) – only support “odps” or “oss”
- shipper_config (OssShipperConfig or OdpsShipperConfig) – the detail shipper config, must be OssShipperConfig or OdpsShipperConfig type
Returns: CreateShipperResponse
Raise: LogException
delete_alert(project, entity)¶ Delete Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – alert name
Returns: DeleteEntityResponse
Raise: LogException
delete_consumer_group(project, logstore, consumer_group)[source]¶ Delete a consumer group.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
Returns: None
delete_dashboard(project, entity)¶ Delete Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – dashboard name
Returns: DeleteEntityResponse
Raise: LogException
delete_external_store(project_name, store_name)[source]¶ Delete an external store. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- store_name (string) – the external store name
Returns: DeleteExternalStoreResponse
Raise: LogException
delete_index(project_name, logstore_name)[source]¶ Delete the index of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: DeleteIndexResponse
Raise: LogException
delete_logstore(project_name, logstore_name)[source]¶ Delete a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: DeleteLogStoreResponse
Raise: LogException
delete_logtail_config(project_name, config_name)[source]¶ Delete a logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name
Returns: DeleteLogtailConfigResponse
Raise: LogException
delete_machine_group(project_name, group_name)[source]¶ Delete a machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name
Returns: DeleteMachineGroupResponse
Raise: LogException
delete_project(project_name)[source]¶ Delete a project. Unsuccessful operation will cause a LogException.
Parameters: project_name (string) – the Project name Returns: DeleteProjectResponse Raise: LogException
delete_savedsearch(project, entity)¶ Delete Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – savedsearch name
Returns: DeleteEntityResponse
Raise: LogException
delete_shard(project_name, logstore_name, shardId)[source]¶ Delete a readonly shard. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shardId (int) – the read only shard id
Returns: ListShardResponse
Raise: LogException
delete_shipper(project_name, logstore_name, shipper_name)[source]¶ Delete an ODPS/OSS shipper. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
Returns: DeleteShipperResponse
Raise: LogException
es_migration(hosts, project_name, indexes=None, query=None, scroll='5m', logstore_index_mappings=None, pool_size=10, time_reference=None, source=None, topic=None, wait_time_in_secs=60, auto_creation=True)[source]¶ Migrate data from Elasticsearch to Aliyun Log Service.
Parameters: - hosts (string) – a comma-separated list of source ES nodes. e.g. “localhost:9200,other_host:9200”
- project_name (string) – specify the project_name of your log services. e.g. “your_project”
- indexes (string) – a comma-separated list of source index names. e.g. “index1,index2”
- query (string) – used to filter docs, so that you can specify the docs you want to migrate. e.g. ‘{“query”: {“match”: {“title”: “python”}}}’
- scroll (string) – specify how long a consistent view of the index should be maintained for scrolled search. e.g. “5m”
- logstore_index_mappings (string) – specify the mappings of log service logstore and ES index. e.g. ‘{“logstore1”: “my_index*”, “logstore2”: “index1,index2”, “logstore3”: “index3”}’
- pool_size (int) – specify the size of process pool. e.g. 10
- time_reference (string) – specify what ES doc’s field to use as log’s time field. e.g. “field1”
- source (string) – specify the value of log’s source field. e.g. “your_source”
- topic (string) – specify the value of log’s topic field. e.g. “your_topic”
- wait_time_in_secs (int) – specify the waiting time, in seconds, between initializing aliyun log service and executing the data migration task. e.g. 60
- auto_creation (bool) – specify whether to let the tool create logstore and index automatically for you. e.g. True
Returns: MigrationResponse
Raise: Exception
get_alert(project, entity)¶ Get Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – alert name
Returns: GetEntityResponse
Raise: LogException
get_begin_cursor(project_name, logstore_name, shard_id)[source]¶ Get the begin cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
Returns: GetCursorResponse
Raise: LogException
get_check_point(project, logstore, consumer_group, shard=-1)[source]¶ Get the consumer group check point.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- shard (int) – shard id
Returns: ConsumerGroupCheckPointResponse
get_check_point_fixed(project, logstore, consumer_group, shard=-1)[source]¶ Get the consumer group check point.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- shard (int) – shard id
Returns: ConsumerGroupCheckPointResponse
get_config_applied_machine_groups(project_name, config_name)[source]¶ Get the machine group names a logtail config applies to. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name used to apply
Returns: GetConfigAppliedMachineGroupsResponse
Raise: LogException
get_cursor(project_name, logstore_name, shard_id, start_time)[source]¶ Get a cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- start_time (string/int) – the start time of cursor, e.g 1441093445 or “begin”/”end”, or readable time like “%Y-%m-%d %H:%M:%S CST”, also support human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
Returns: GetCursorResponse
Raise: LogException
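Since start_time accepts several shapes ("begin"/"end", an epoch timestamp, or a readable time string), a minimal normalizer sketch can make the accepted subset explicit (the helper is ours, not SDK API, and it covers only naive "%Y-%m-%d %H:%M:%S" strings rather than every format the SDK accepts):

```python
import datetime

def to_cursor_start(value):
    """Normalize a get_cursor start_time value.

    Accepts "begin"/"end", an int epoch timestamp, or a naive
    "%Y-%m-%d %H:%M:%S" string (converted using the local timezone).
    """
    if value in ("begin", "end"):
        return value
    if isinstance(value, int):
        return value
    parsed = datetime.datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    return int(parsed.timestamp())
```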
get_cursor_time(project_name, logstore_name, shard_id, cursor)[source]¶ Get cursor time from log service. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- cursor (string) – the cursor to get its service receive time
Returns: GetCursorTimeResponse
Raise: LogException
get_dashboard(project, entity)¶ Get Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – dashboard name
Returns: GetEntityResponse
Raise: LogException
get_end_cursor(project_name, logstore_name, shard_id)[source]¶ Get the end cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
Returns: GetCursorResponse
Raise: LogException
get_external_store(project_name, store_name)[source]¶ Get the external store meta info. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- store_name (string) – the external store name
Returns: GetLogStoreResponse
Raise: LogException
get_histograms(request)[source]¶ Get histograms of the requested query from log service. Unsuccessful operation will cause a LogException.
Parameters: request (GetHistogramsRequest) – the GetHistograms request parameters class. Returns: GetHistogramsResponse Raise: LogException
get_index_config(project_name, logstore_name)[source]¶ Get the index config detail of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: GetIndexResponse
Raise: LogException
get_log(project, logstore, from_time, to_time, topic=None, query=None, reverse=False, offset=0, size=100)[source]¶ Get logs from log service; will retry when the result is incomplete. Unsuccessful operation will cause a LogException. Note: for larger volumes of data (e.g. > 1 million logs), use get_log_all.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- from_time (int/string) – the begin timestamp or format of time in readable time like “%Y-%m-%d %H:%M:%S CST” e.g. “2018-01-02 12:12:10 CST”, also support human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (int/string) – the end timestamp or format of time in readable time like “%Y-%m-%d %H:%M:%S CST” e.g. “2018-01-02 12:12:10 CST”, also support human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- topic (string) – topic name of logs, could be None
- query (string) – user defined query, could be None
- reverse (bool) – if reverse is set to true, the query will return the latest logs first, default is false
- offset (int) – line offset of return logs
- size (int) – max line number of return logs, -1 means get all
Returns: GetLogsResponse
Raise: LogException
get_log_all(project, logstore, from_time, to_time, topic=None, query=None, reverse=False, offset=0)[source]¶ Get logs from log service; will retry when the result is incomplete. Unsuccessful operation will cause a LogException. Unlike get_log with size=-1, it iteratively fetches the data in batches of 100 and yields each batch; in the CLI this makes it possible to apply a jmes filter to each batch and to fetch larger volumes of data.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- from_time (int/string) – the begin timestamp or format of time in readable time like “%Y-%m-%d %H:%M:%S CST” e.g. “2018-01-02 12:12:10 CST”, also support human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (int/string) – the end timestamp or format of time in readable time like “%Y-%m-%d %H:%M:%S CST” e.g. “2018-01-02 12:12:10 CST”, also support human readable string, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- topic (string) – topic name of logs, could be None
- query (string) – user defined query, could be None
- reverse (bool) – if reverse is set to true, the query will return the latest logs first, default is false
- offset (int) – offset to start, by default is 0
Returns: GetLogsResponse iterator
Raise: LogException
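The iterative behavior described above (fetch a batch, yield it, advance the offset until a short batch signals the end) can be sketched generically; fetch_page below is a stand-in for a single get_log call, not the SDK API:

```python
def iter_all(fetch_page, batch_size=100):
    """Yield batches from fetch_page(offset, size) until a short batch ends it."""
    offset = 0
    while True:
        batch = fetch_page(offset, batch_size)
        if batch:
            yield batch
        if len(batch) < batch_size:  # short (or empty) batch: no more data
            break
        offset += batch_size

# Demo against an in-memory "logstore" of 250 fake log lines:
logs = [f"log-{i}" for i in range(250)]
pages = list(iter_all(lambda off, size: logs[off:off + size], batch_size=100))
print([len(p) for p in pages])
# [100, 100, 50]
```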
get_logs(request)[source]¶ Get logs from log service. Unsuccessful operation will cause a LogException. Note: for larger volumes of data (e.g. > 1 million logs), use get_log_all.
Parameters: request (GetLogsRequest) – the GetLogs request parameters class. Returns: GetLogsResponse Raise: LogException
get_logstore(project_name, logstore_name)[source]¶ Get the logstore meta info. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: GetLogStoreResponse
Raise: LogException
get_logtail_config(project_name, config_name)[source]¶ Get a logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name
Returns: GetLogtailConfigResponse
Raise: LogException
get_machine_group(project_name, group_name)[source]¶ Get a machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name to get
Returns: GetMachineGroupResponse
Raise: LogException
-
get_machine_group_applied_configs
(project_name, group_name)[source]¶ get the logtail config names applied in a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name
Returns: GetMachineGroupAppliedConfigResponse
Raise: LogException
-
get_previous_cursor_time
(project_name, logstore_name, shard_id, cursor, normalize=True)[source]¶ Get previous cursor time from log service. Note: when normalize is True, a cursor that is out of range will be normalized to the nearest cursor. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- cursor (string) – the cursor to get its service receive time
- normalize (bool) – fix the cursor or not if it’s out of scope
Returns: GetCursorTimeResponse
Raise: LogException
-
get_project
(project_name)[source]¶ get project. Unsuccessful operation will cause a LogException.
Parameters: project_name (string) – the Project name Returns: GetProjectResponse Raise: LogException
-
get_project_logs
(request)[source]¶ Get logs from log service. Unsuccessful operation will cause a LogException.
Parameters: request (GetProjectLogsRequest) – the GetProjectLogs request parameters class. Returns: GetLogsResponse Raise: LogException
-
get_resource_usage
(project)[source]¶ get resource usage of the project. Unsuccessful operation will cause a LogException.
Parameters: project (string) – project name Returns: dict Raise: LogException
-
get_savedsearch
(project, entity)¶ Get Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – savedsearch name
Returns: GetEntityResponse
Raise: LogException
-
get_shipper_config
(project_name, logstore_name, shipper_name)[source]¶ get odps/oss shipper. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
Returns: GetShipperConfigResponse
Raise: LogException
-
get_shipper_tasks
(project_name, logstore_name, shipper_name, start_time, end_time, status_type='', offset=0, size=100)[source]¶ get odps/oss shipper tasks in a certain time range. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
- start_time (int) – the start timestamp
- end_time (int) – the end timestamp
- status_type (string) – one of [‘’, ‘fail’, ‘success’, ‘running’]; if status_type is ‘’, tasks of all statuses are returned
- offset (int) – the begin task offset, -1 means all
- size (int) – the needed tasks count
Returns: GetShipperTasksResponse
Raise: LogException
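Failed tasks returned here are typically fed back into retry_shipper_tasks, which (per its note) accepts at most 10 task ids per call. A minimal, SDK-free batching helper to prepare those calls:

```python
# Split a failed-task-id list into batches of at most 10 ids, the
# documented per-call cap for retry_shipper_tasks.
def chunk(task_ids, size=10):
    """Return consecutive slices of task_ids, each no larger than size."""
    return [task_ids[i:i + size] for i in range(0, len(task_ids), size)]

failed = ["task-%d" % i for i in range(23)]  # hypothetical task ids
batches = chunk(failed)
# each batch would then be passed to:
#   client.retry_shipper_tasks(project, logstore, shipper_name, batch)
```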
-
heart_beat
(project, logstore, consumer_group, consumer, shards=None)[source]¶ Heartbeat consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- consumer (string) – consumer name
- shards (int list) – shard id list e.g. [0,1,2]
Returns: None
-
list_alert
(project, offset=0, size=100)¶ list the Alert, get first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_consumer_group
(project, logstore)[source]¶ List consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
Returns: ListConsumerGroupResponse
-
list_dashboard
(project, offset=0, size=100)¶ list the Dashboard, get first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_external_store
(project_name, external_store_name_pattern=None, offset=0, size=100)[source]¶ list the external stores in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- external_store_name_pattern (string) – a sub-name pattern; the server returns external store names containing this sub-name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_logstore
(project_name, logstore_name_pattern=None, offset=0, size=100)[source]¶ list the logstores in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name_pattern (string) – a sub-name pattern; the server returns logstore names containing this sub-name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_logstore_acl
(project_name, logstore_name, offset=0, size=100)[source]¶ list acl of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- offset (int) – the offset of all acl
- size (int) – the max return acl count
Returns: ListAclResponse
Raise: LogException
-
list_logstores
(request)[source]¶ List all logstores of the requested project. Unsuccessful operation will cause a LogException.
Parameters: request (ListLogstoresRequest) – the ListLogstores request parameters class. Returns: ListLogStoresResponse Raise: LogException
-
list_logtail_config
(project_name, offset=0, size=100)[source]¶ list logtail config names in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- offset (int) – the offset of all config names
- size (int) – the max return names count, -1 means all
Returns: ListLogtailConfigResponse
Raise: LogException
-
list_machine_group
(project_name, offset=0, size=100)[source]¶ list machine group names in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- offset (int) – the offset of all group name
- size (int) – the max return names count, -1 means all
Returns: ListMachineGroupResponse
Raise: LogException
-
list_machines
(project_name, group_name, offset=0, size=100)[source]¶ list machines in a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name to list
- offset (int) – the offset of all group name
- size (int) – the max return names count, -1 means all
Returns: ListMachinesResponse
Raise: LogException
-
list_project
(offset=0, size=100)[source]¶ list the projects. Unsuccessful operation will cause a LogException.
Parameters: - offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means return all data
Returns: ListProjectResponse
Raise: LogException
-
list_project_acl
(project_name, offset=0, size=100)[source]¶ list acl of a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- offset (int) – the offset of all acl
- size (int) – the max return acl count
Returns: ListAclResponse
Raise: LogException
-
list_savedsearch
(project, offset=0, size=100)¶ list the Savedsearch, get first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_shards
(project_name, logstore_name)[source]¶ list the shard meta of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: ListShardResponse
Raise: LogException
-
list_shipper
(project_name, logstore_name)[source]¶ list odps/oss shippers. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: ListShipperResponse
Raise: LogException
-
list_topics
(request)[source]¶ List all topics in a logstore. Unsuccessful operation will cause a LogException.
Parameters: request (ListTopicsRequest) – the ListTopics request parameters class. Returns: ListTopicsResponse Raise: LogException
-
merge_shard
(project_name, logstore_name, shardId)[source]¶ merge two adjacent readwrite shards into one shard. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shardId (int) – the shard id of the left shard, server will determine the right adjacent shardId
Returns: ListShardResponse
Raise: LogException
-
pull_log
(project_name, logstore_name, shard_id, from_time, to_time, batch_size=1000, compress=True)[source]¶ batch pull log data from log service using a time range. Unsuccessful operation will cause a LogException. The time parameters refer to the time when the server received the logs.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- from_time (string/int) – cursor value, could be “begin”, a timestamp, or a readable time string like “%Y-%m-%d %H:%M:%S CST”, e.g. “2018-01-02 12:12:10 CST”; human-readable strings such as “1 hour ago”, “now”, “yesterday 0:0:0” are also supported, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – cursor value; accepts the same formats as from_time
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 1000
- compress (bool) – if use compression, by default it’s True
Returns: PullLogResponse
Raise: LogException
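As a rough illustration of the timestamp form of from_time/to_time, a plain strptime conversion can turn a wall-clock string into an epoch timestamp. This sketch assumes the string is in UTC; the SDK/CLI parser referenced above also accepts richer forms like “1 hour ago” and “now”, which this helper does not handle.

```python
import time
import calendar

def to_ts(s, fmt="%Y-%m-%d %H:%M:%S"):
    """Parse a wall-clock string (assumed UTC) into an epoch timestamp."""
    return calendar.timegm(time.strptime(s, fmt))

ts = to_ts("2018-01-02 12:12:10")
```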
-
pull_log_dump
(project_name, logstore_name, from_time, to_time, file_path, batch_size=500, compress=True, encodings=None)[source]¶ dump all logs, one log per line, into file_path; the time parameters are the log received time on the server side.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- from_time (string/int) – cursor value, could be “begin”, a timestamp, or a readable time string like “%Y-%m-%d %H:%M:%S CST”, e.g. “2018-01-02 12:12:10 CST”; human-readable strings such as “1 hour ago”, “now”, “yesterday 0:0:0” are also supported, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – cursor value; accepts the same formats as from_time
- file_path (string) – file path with {} for shard id. e.g. “/data/dump_{}.data”, {} will be replaced with each partition.
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 500
- compress (bool) – if use compression, by default it’s True
- encodings (string list) – encodings like [“utf8”, “latin1”] used to dump the logs in json format to file; default is [“utf8”]
Returns: LogResponse {“total_count”: 30, “files”: {“file_path_1”: 10, “file_path_2”: 20}}
Raise: LogException
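The “{}” placeholder in file_path expands to each shard id, so a dump over several shards produces one file per shard:

```python
# file_path uses "{}" as a per-shard placeholder; e.g. shards 0-2
# each get their own dump file.
template = "/data/dump_{}.data"
paths = [template.format(shard) for shard in (0, 1, 2)]
```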
-
pull_logs
(project_name, logstore_name, shard_id, cursor, count=1000, end_cursor=None, compress=True)[source]¶ batch pull log data from log service. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- cursor (string) – the start cursor to get data
- count (int) – the required pull log package count, default 1000 packages
- end_cursor (string) – the end cursor position to get data
- compress (boolean) – if use zip compress for transfer data
Returns: PullLogResponse
Raise: LogException
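The usual consumption pattern is a cursor loop: pull from a cursor, advance to the returned next cursor, and stop when it no longer moves. A hedged sketch with stub classes (StubClient and StubResponse are hypothetical stand-ins for LogClient and PullLogResponse, so it runs without any network):

```python
# Hypothetical pull loop over one shard: keep pulling until the
# next cursor stops advancing (caught up with the stream).
class StubResponse:
    """Stand-in for PullLogResponse."""
    def __init__(self, groups, next_cursor):
        self._groups, self._next = groups, next_cursor
    def get_loggroup_count(self):
        return len(self._groups)
    def get_next_cursor(self):
        return self._next

class StubClient:
    """Serves a tiny fixed cursor chain: c0 -> c1 -> c2 (end)."""
    DATA = {"c0": (["g1", "g2"], "c1"), "c1": (["g3"], "c2"), "c2": ([], "c2")}
    def pull_logs(self, project, logstore, shard_id, cursor, count=1000):
        groups, nxt = self.DATA[cursor]
        return StubResponse(groups, nxt)

def drain_shard(client, project, logstore, shard_id, cursor):
    """Count log groups from cursor until the cursor stops advancing."""
    total = 0
    while True:
        resp = client.pull_logs(project, logstore, shard_id, cursor)
        total += resp.get_loggroup_count()
        nxt = resp.get_next_cursor()
        if nxt == cursor:  # cursor no longer advances: caught up
            break
        cursor = nxt
    return total

count = drain_shard(StubClient(), "p", "ls", 0, "c0")
```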
-
put_log_raw
(project, logstore, log_group, compress=None)[source]¶ Put logs to log service using raw protobuf data.
Parameters: - project (string) – the Project name
- logstore (string) – the logstore name
- log_group (LogGroup) – log group structure
- compress (boolean) – compress or not, by default is True
Returns: PutLogsResponse
Raise: LogException
-
put_logs
(request)[source]¶ Put logs to log service, up to 512000 logs and up to 10 MB in size per request. Unsuccessful operation will cause a LogException.
Parameters: request (PutLogsRequest) – the PutLogs request parameters class Returns: PutLogsResponse Raise: LogException
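Given the documented per-request limits (up to 512000 logs and 10 MB), callers often pre-batch their items before building PutLogsRequest objects. A minimal, SDK-free batching sketch; the byte sizing via len() is an assumption for illustration (real log items would need their serialized size):

```python
# Split a stream of log items into batches that respect both the
# documented count cap (512000 logs) and size cap (10 MB) per request.
MAX_LOGS = 512000
MAX_BYTES = 10 * 1024 * 1024

def batch_logs(items, sizeof=len):
    """Yield lists of items, each within both the count and byte limits."""
    batch, batch_bytes = [], 0
    for item in items:
        item_bytes = sizeof(item)
        if batch and (len(batch) >= MAX_LOGS
                      or batch_bytes + item_bytes > MAX_BYTES):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(item)
        batch_bytes += item_bytes
    if batch:
        yield batch

# e.g. three ~6 MB payloads can never share a 10 MB batch:
big = ["x" * (6 * 1024 * 1024)] * 3
batches = list(batch_logs(big))
```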
-
remove_config_to_machine_group
(project_name, config_name, group_name)[source]¶ remove a logtail config from a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name to remove
- group_name (string) – the machine group name
Returns: RemoveConfigToMachineGroupResponse
Raise: LogException
-
retry_shipper_tasks
(project_name, logstore_name, shipper_name, task_list)[source]¶ retry failed tasks; only failed tasks can be retried. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
- task_list (string array) – the failed task_id list, e.g. [‘failed_task_id_1’, ‘failed_task_id_2’, …]; currently at most 10 tasks can be retried per call
Returns: RetryShipperTasksResponse
Raise: LogException
-
set_source
(source)[source]¶ Set the source of the log client
Parameters: source (string) – new source Returns: None
-
set_user_agent
(user_agent)[source]¶ set user agent
Parameters: user_agent (string) – user agent Returns: None
-
split_shard
(project_name, logstore_name, shardId, split_hash)[source]¶ split a readwrite shard into two shards. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shardId (int) – the shard id
- split_hash (string) – the internal hash between the shard begin and end hash
Returns: ListShardResponse
Raise: LogException
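One common choice for split_hash is the midpoint of the shard's hash range. The sketch below assumes shard boundaries are fixed-width hex strings (as returned in shard metadata) and computes their midpoint; this arithmetic is an illustration, not an SDK helper.

```python
# Assumed: shard begin/end hashes are equal-width hex strings.
# The midpoint of [begin, end) is a valid internal split point.
def mid_hash(begin_hash, end_hash):
    """Return the hex midpoint of two equal-width hex hash strings."""
    width = len(begin_hash)
    mid = (int(begin_hash, 16) + int(end_hash, 16)) // 2
    return format(mid, "0{}x".format(width))

h = mid_hash("00000000000000000000000000000000",
             "ffffffffffffffffffffffffffffffff")
```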
-
transform_data
(project, logstore, config, from_time, to_time=None, to_client=None, to_project=None, to_logstore=None, shard_list=None, batch_size=500, compress=True, cg_name=None, c_name=None, cg_heartbeat_interval=None, cg_data_fetch_interval=None, cg_in_order=None, cg_worker_pool_size=None)[source]¶ transform data from one logstore to another (could be the same one, or in a different region); the time passed is the log received time on the server side. There are two modes: batch mode and consumer group mode. For batch mode, just leave cg_name and the later options as None.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- config (string) – transform config imported or path of config (in python)
- from_time (string/int) – cursor value, could be “begin”, a timestamp, or a readable time string like “%Y-%m-%d %H:%M:%S CST”, e.g. “2018-01-02 12:12:10 CST”; human-readable strings such as “1 hour ago”, “now”, “yesterday 0:0:0” are also supported, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – cursor value; accepts the same formats as from_time. Leave it as None if a consumer group is configured.
- to_client (LogClient) – logclient instance, if empty will use source client
- to_project (string) – project name, if empty will use source project
- to_logstore (string) – logstore name, if empty will use source logstore
- shard_list (string) – shard number list; could be a comma-separated list or ranges: 1,20,31-40
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 500
- compress (bool) – if use compression, by default it’s True
- cg_name (string) – consumer group name. must configure if it’s consumer group mode.
- c_name (string) – consumer name for consumer group mode, default: CLI-transform-data-${process_id}
- cg_heartbeat_interval (int) – cg_heartbeat_interval, default 20
- cg_data_fetch_interval (int) – cg_data_fetch_interval, default 2
- cg_in_order (bool) – cg_in_order, default False
- cg_worker_pool_size (int) – cg_worker_pool_size, default 2
Returns: LogResponse {“total_count”: 30, “shards”: {0: {“count”: 10, “removed”: 1}, 2: {“count”: 20, “removed”: 1}}}
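The shard_list syntax described above ("comma-separated list or ranges: 1,20,31-40") can be expanded with a small helper; parse_shard_list is an illustrative name, not an SDK function:

```python
# Expand a shard_list spec like "1,20,31-40" into a list of shard ids.
def parse_shard_list(spec):
    """Parse comma-separated shard ids and inclusive lo-hi ranges."""
    shards = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            shards.extend(range(int(lo), int(hi) + 1))
        else:
            shards.append(int(part))
    return shards

ids = parse_shard_list("1,20,31-40")
```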
-
update_alert
(project, detail)¶ Update Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
update_check_point
(project, logstore, consumer_group, shard, check_point, consumer='', force_success=True)[source]¶ Update check point
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- shard (int) – shard id
- check_point (string) – checkpoint name
- consumer (string) – consumer name
- force_success (bool) – if force to succeed
Returns: None
-
update_consumer_group
(project, logstore, consumer_group, timeout=None, in_order=None)[source]¶ Update consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- timeout (int) – timeout
- in_order (bool) – order
Returns: None
-
update_dashboard
(project, detail)¶ Update Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
update_external_store
(project_name, config)[source]¶ update the external store config. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config – the external store config
Returns: UpdateExternalStoreResponse Raise: LogException
-
update_index
(project_name, logstore_name, index_detail)[source]¶ update index for a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- index_detail (IndexConfig) – the index config detail used to update index
Returns: UpdateIndexResponse
Raise: LogException
-
update_logstore
(project_name, logstore_name, ttl=None, enable_tracking=None, shard_count=None, append_meta=None, auto_split=None, max_split_shard=None, preserve_storage=None)[source]¶ update the logstore meta info. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- ttl (int) – the life cycle of log in the logstore in days
- enable_tracking (bool) – enable web tracking
- shard_count (int) – deprecated, the shard count could only be updated by split & merge
- append_meta (bool) – whether to append meta info (server received time and client IP) to each received log
- auto_split (bool) – auto split shard; max_split_shard will be 64 by default if auto_split is True
- max_split_shard (int) – max shard to split, up to 64
- preserve_storage (bool) – if True, data is persisted permanently and ttl is ignored
Returns: UpdateLogStoreResponse
Raise: LogException
-
update_logstore_acl
(project_name, logstore_name, acl_action, acl_config)[source]¶ update acl of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- acl_action (string) – “grant” or “revoke”, grant or revoke the acl_config to/from a logstore
- acl_config (acl_config.AclConfig) – the detail acl config info
Returns: UpdateAclResponse
Raise: LogException
-
update_logtail_config
(project_name, config_detail)[source]¶ update logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_detail (SeperatorFileConfigDetail or SimpleFileConfigDetail or FullRegFileConfigDetail or JsonFileConfigDetail or ApsaraFileConfigDetail or SyslogConfigDetail or CommonRegLogConfigDetail) – the logtail config detail info; use LogtailConfigGenerator.from_json to generate a config from JSON
Returns: UpdateLogtailConfigResponse
Raise: LogException
-
update_machine_group
(project_name, group_detail)[source]¶ update machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_detail (MachineGroupDetail) – the machine group detail config
Returns: UpdateMachineGroupResponse
Raise: LogException
-
update_project_acl
(project_name, acl_action, acl_config)[source]¶ update acl of a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- acl_action (string) – “grant” or “revoke”, grant or revoke the acl_config to/from a project
- acl_config (acl_config.AclConfig) – the detail acl config info
Returns: UpdateAclResponse
Raise: LogException
-
update_savedsearch
(project, detail)¶ Update Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
update_shipper
(project_name, logstore_name, shipper_name, shipper_type, shipper_config)[source]¶ update odps/oss shipper; for each type, only one shipper is allowed. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
- shipper_type (string) – only supports “odps” or “oss”; the type must be the same as the original shipper
- shipper_config (OssShipperConfig or OdpsShipperConfig) – the detail shipper config, must be OssShipperConfig or OdpsShipperConfig type
Returns: UpdateShipperResponse
Raise: LogException
-
class
aliyun.log.
LogException
(errorCode, errorMessage, requestId='', resp_status=200, resp_header='', resp_body='')[source]¶ The Exception of the log request & response.
Parameters: - errorCode (string) – log service error code
- errorMessage (string) – detailed information for the exception
- requestId (string) – the request id of the response, ‘’ is set if client error
-
class
aliyun.log.
GetHistogramsRequest
(project=None, logstore=None, fromTime=None, toTime=None, topic=None, query=None)[source]¶ The request used to get histograms of a query from log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- fromTime (int/string) – the begin time: a timestamp, or a readable time string like “%Y-%m-%d %H:%M:%S CST”, e.g. “2018-01-02 12:12:10”; human-readable strings such as “1 hour ago”, “now”, “yesterday 0:0:0” are also supported, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- toTime (int/string) – the end time; accepts the same formats as fromTime
- topic (string) – topic name of logs
- query (string) – user defined query
-
class
aliyun.log.
GetLogsRequest
(project=None, logstore=None, fromTime=None, toTime=None, topic=None, query=None, line=100, offset=0, reverse=False)[source]¶ The request used to get logs by a query from log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- fromTime (int/string) – the begin time: a timestamp, or a time string in format “%Y-%m-%d %H:%M:%S”, e.g. “2018-01-02 12:12:10”
- toTime (int/string) – the end time; accepts the same formats as fromTime
- topic (string) – topic name of logs
- query (string) – user defined query
- line (int) – max line number of return logs
- offset (int) – line offset of return logs
- reverse (bool) – if reverse is set to true, the query will return the latest logs first
-
class
aliyun.log.
GetProjectLogsRequest
(project=None, query=None)[source]¶ The request used to get logs by a query from log cross multiple logstores.
Parameters: - project (string) – project name
- query (string) – user defined query
-
class
aliyun.log.
IndexConfig
(ttl=1, line_config=None, key_config_list=None, all_keys_config=None, log_reduce=None)[source]¶ The index config of a logstore
Parameters: - ttl (int) – this parameter is deprecated, the ttl is same as logstore’s ttl
- line_config (IndexLineConfig) – the index config of the whole log line
- key_config_list (dict) – dict (string => IndexKeyConfig), the index key configs of the keys
- all_keys_config (IndexKeyConfig) – the key config of all keys; newly created logstores should never use this param, it is only kept for compatibility with old configs
- log_reduce (bool) – if to enable logreduce
-
class
aliyun.log.
ListTopicsRequest
(project=None, logstore=None, token=None, line=None)[source]¶ The request used to get topics of a query from log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- token (string) – the start token to list topics
- line (int) – max topic counts to return
-
class
aliyun.log.
ListLogstoresRequest
(project=None)[source]¶ The request used to list log store from log.
Parameters: project (string) – project name
-
class
aliyun.log.
PluginConfigDetail
(logstoreName, configName, plugin, **extended_items)[source]¶ The logtail config for simple mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file pattern, e.g. *.log; files matched are /apsara/nuwa/…/*.log
- localStorage (bool) – whether to use a 1 GB local cache when logtail is offline, default is True
- enableRawLog (bool) – whether to upload raw data in content, default is False
- topicFormat (string) – “none”, “group_topic” or a regex to extract a value from the file path, e.g. “/test/(\w+).log” will extract each file name as topic, default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folders to scan, by default it’s 100; 0 means just scan the root folder
- preserve (bool) – if preserve on time-out, by default False; a 30-minute time-out applies if set to True
- preserveDepth (int) – time-out folder depth, 1-3
- filterKey (string list) – only keep logs which match the keys, e.g. [“city”, “location”] will only keep logs matching the two fields
- filterRegex (string list) – matched values for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”]; note it’s a regex value list
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
-
class
aliyun.log.
SeperatorFileConfigDetail
(logstoreName, configName, logPath, filePattern, logSample, separator, key, timeKey='', timeFormat=None, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, createTime=None, modifyTime=None, **extended_items)[source]¶ The logtail config for separator mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file pattern, e.g. *.log; files matched are /apsara/nuwa/…/*.log
- logSample (string) – log sample, e.g. shanghai|2000|east
- separator (string) – ‘\t’ for tab, ‘ ’ for space, ‘|’, or up to 3 chars like “&&&” or “||”
- key (string list) – keys to map the fields, like [“city”, “population”, “location”]
- timeKey (string) – one key name in key used to set the time, or None to use system time.
- timeFormat (string) – when timeKey is not None, set its format, refer to https://help.aliyun.com/document_detail/28980.html?spm=5176.2020520112.113.4.2243b18eHkxdNB
- localStorage (bool) – whether to use a 1 GB local cache when logtail is offline, default is True
- enableRawLog (bool) – whether to upload raw data in content, default is False
- topicFormat (string) – “none”, “group_topic” or a regex to extract a value from the file path, e.g. “/test/(\w+).log” will extract each file name as topic, default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folders to scan, by default it’s 100; 0 means just scan the root folder
- preserve (bool) – if preserve on time-out, by default False; a 30-minute time-out applies if set to True
- preserveDepth (int) – time-out folder depth, 1-3
- filterKey (string list) – only keep logs which match the keys, e.g. [“city”, “location”] will only keep logs matching the two fields
- filterRegex (string list) – matched values for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”]; note it’s a regex value list
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
-
class
aliyun.log.
SimpleFileConfigDetail
(logstoreName, configName, logPath, filePattern, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, **extended_items)[source]¶ The logtail config for simple mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file pattern, e.g. *.log; files matched are /apsara/nuwa/…/*.log
- localStorage (bool) – whether to use a 1 GB local cache when logtail is offline, default is True
- enableRawLog (bool) – whether to upload raw data in content, default is False
- topicFormat (string) – “none”, “group_topic” or a regex to extract a value from the file path, e.g. “/test/(\w+).log” will extract each file name as topic, default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folders to scan, by default it’s 100; 0 means just scan the root folder
- preserve (bool) – if preserve on time-out, by default False; a 30-minute time-out applies if set to True
- preserveDepth (int) – time-out folder depth, 1-3
- filterKey (string list) – only keep logs which match the keys, e.g. [“city”, “location”] will only keep logs matching the two fields
- filterRegex (string list) – matched values for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”]; note it’s a regex value list
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
-
class
aliyun.log.
FullRegFileConfigDetail
(logstoreName, configName, logPath, filePattern, logSample, logBeginRegex=None, regex=None, key=None, timeFormat=None, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, **extended_items)[source]¶ The logtail config for full regex mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file name pattern, e.g. *.log; combined with logPath it matches /apsara/nuwa/…/*.log
- logSample (string) – log sample, e.g. shanghai|2000|east
- logBeginRegex (string) – regex to match the first line of a log entry; None means ‘.*’, i.e. single-line mode
- regex (string) – regex to extract fields from the log; None means (.*), capturing the whole line
- key (string list) – keys to map the fields, like [“city”, “population”, “location”]; None means [“content”]
- timeFormat (string) – when timeKey is not None, set its format; refer to https://help.aliyun.com/document_detail/28980.html?spm=5176.2020520112.113.4.2243b18eHkxdNB
- localStorage (bool) – whether to use up to 1GB of local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content field; default is False
- topicFormat (string) – “none”, “group_topic”, or a regex to extract the topic from the file path, e.g. “/test/(\w+).log” will use each matched file name as the topic; default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folders to scan; default is 100, and 0 means scan only the root folder
- preserve (bool) – whether to apply a time-out to unmodified files; default is False, and setting it to True enables a 30-minute time-out
- preserveDepth (int) – folder depth to which the time-out applies, 1-3
- filterKey (string list) – only keep logs that match the keys, e.g. [“city”, “location”] will only keep logs that contain the two fields
- filterRegex (string list) – matched values for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”]; note it’s a list of regex values
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
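For example, a sketch matching the logSample shown above: lines like shanghai|2000|east are split into three named fields (all names below are placeholders):

```python
from aliyun.log import FullRegFileConfigDetail

# Placeholder names; the regex splits "shanghai|2000|east"
# into the city, population and location fields.
config = FullRegFileConfigDetail(
    logstoreName="my_logstore",
    configName="full_regex_config",
    logPath="/var/log/app/",
    filePattern="*.log",
    logSample="shanghai|2000|east",
    regex=r"(\w+)\|(\d+)\|(\w+)",
    key=["city", "population", "location"],
)
```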
class aliyun.log.JsonFileConfigDetail(logstoreName, configName, logPath, filePattern, timeKey='', timeFormat=None, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, createTime=None, modifyTime=None, **extended_items)[source]¶
The logtail config for json mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file name pattern, e.g. *.log; combined with logPath it matches /apsara/nuwa/…/*.log
- timeKey (string) – the key whose value is used as the log time, or None to use the system time
- timeFormat (string) – when timeKey is not None, set its format; refer to https://help.aliyun.com/document_detail/28980.html?spm=5176.2020520112.113.4.2243b18eHkxdNB
- localStorage (bool) – whether to use up to 1GB of local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content field; default is False
- topicFormat (string) – “none”, “group_topic”, or a regex to extract the topic from the file path, e.g. “/test/(\w+).log” will use each matched file name as the topic; default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folders to scan; default is 100, and 0 means scan only the root folder
- preserve (bool) – whether to apply a time-out to unmodified files; default is False, and setting it to True enables a 30-minute time-out
- preserveDepth (int) – folder depth to which the time-out applies, 1-3
- filterKey (string list) – only keep logs that match the keys, e.g. [“city”, “location”] will only keep logs that contain the two fields
- filterRegex (string list) – matched values for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”]; note it’s a list of regex values
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
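A sketch for JSON-formatted logs, taking the log time from a hypothetical "timestamp" field in each record (all names are placeholders):

```python
from aliyun.log import JsonFileConfigDetail

# Placeholder names; "timestamp" is an assumed key in the JSON logs.
config = JsonFileConfigDetail(
    logstoreName="my_logstore",
    configName="json_log_config",
    logPath="/var/log/app/",
    filePattern="*.json",
    timeKey="timestamp",
    timeFormat="%Y-%m-%d %H:%M:%S",
)
```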
class aliyun.log.ApsaraFileConfigDetail(logstoreName, configName, logPath, filePattern, logBeginRegex, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, createTime=None, modifyTime=None, **extended_items)[source]¶
The logtail config for Apsara mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file name pattern, e.g. *.log; combined with logPath it matches /apsara/nuwa/…/*.log
- logBeginRegex (string) – regex to match the first line of a log entry; None means ‘.*’, i.e. single-line mode
- localStorage (bool) – whether to use up to 1GB of local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content field; default is False
- topicFormat (string) – “none”, “group_topic”, or a regex to extract the topic from the file path, e.g. “/test/(\w+).log” will use each matched file name as the topic; default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folders to scan; default is 100, and 0 means scan only the root folder
- preserve (bool) – whether to apply a time-out to unmodified files; default is False, and setting it to True enables a 30-minute time-out
- preserveDepth (int) – folder depth to which the time-out applies, 1-3
- filterKey (string list) – only keep logs that match the keys, e.g. [“city”, “location”] will only keep logs that contain the two fields
- filterRegex (string list) – matched values for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”]; note it’s a list of regex values
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
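A sketch for Apsara-formatted logs, where each entry is assumed to start with a bracketed timestamp; the begin regex below is an illustrative pattern, and the names are placeholders:

```python
from aliyun.log import ApsaraFileConfigDetail

# Placeholder names; the regex marks the first line of each entry.
config = ApsaraFileConfigDetail(
    logstoreName="my_logstore",
    configName="apsara_log_config",
    logPath="/apsara/nuwa/",
    filePattern="*.log",
    logBeginRegex=r"\[\d+.*",
)
```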
class aliyun.log.SyslogConfigDetail(logstoreName, configName, tag, localStorage=None, createTime=None, modifyTime=None, **extended_items)[source]¶
The logtail config for syslog mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- tag (string) – tag for the log captured
- localStorage (bool) – whether to use up to 1GB of local cache when logtail is offline; default is True
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
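A minimal sketch (placeholder names) that tags every captured syslog entry:

```python
from aliyun.log import SyslogConfigDetail

config = SyslogConfigDetail(
    logstoreName="my_logstore",
    configName="syslog_config",
    tag="sys_tag",  # attached to every captured entry
)
```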
class aliyun.log.MachineGroupDetail(group_name=None, machine_type=None, machine_list=None, group_type='', group_attribute=None)[source]¶
The machine group detail info
Parameters: - group_name (string) – group name
- machine_type (string) – “ip” or “userdefined”
- machine_list (string list) – the list of machine IPs or user-defined identifiers, e.g. [“127.0.0.1”, “127.0.0.2”]
- group_type (string) – the machine group type, “” or “Armory”
- group_attribute (dict) – the attributes of the group; it contains two optional keys: 1. “externalName”: only used if the group_type is “Armory”, it’s the Armory name; 2. “groupTopic”: the group topic value
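A sketch of an IP-based machine group with a group topic (all values are placeholders):

```python
from aliyun.log import MachineGroupDetail

group = MachineGroupDetail(
    group_name="my_group",
    machine_type="ip",
    machine_list=["127.0.0.1", "127.0.0.2"],
    group_attribute={"groupTopic": "my_topic"},
)
```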
class aliyun.log.PutLogsRequest(project=None, logstore=None, topic=None, source=None, logitems=None, hashKey=None, compress=True, logtags=None)[source]¶
The request used to send data to log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- topic (string) – topic name
- source (string) – source of the logs
- logitems (list<LogItem>) – log data
- hashKey (string) – put data with the given hash key; the data will be sent to the shard whose range contains the hashKey
- compress (bool) – whether to compress the logs
- logtags (list) – list of key:value tag pairs, e.g. [(tag_key_1, tag_value_1), (tag_key_2, tag_value_2)]
class aliyun.log.OssShipperConfig(oss_bucket, oss_prefix, oss_role_arn, buffer_interval=300, buffer_mb=128, compress_type='snappy')[source]¶
An OSS shipper config
Parameters: - oss_bucket (string) – the oss bucket name
- oss_prefix (string) – the prefix path where the logs are saved
- oss_role_arn (string) – the RAM role ARN used to get temporary write permission to the OSS bucket
- buffer_interval (int) – the time (seconds) to buffer before saving to OSS
- buffer_mb (int) – the data size (MB) to buffer before saving to OSS
- compress_type (string) – the compress type; only ‘snappy’ or ‘none’ is supported
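A sketch shipping logs to OSS every 300 seconds or 128MB, whichever comes first (bucket, prefix, and role ARN are placeholders):

```python
from aliyun.log import OssShipperConfig

config = OssShipperConfig(
    oss_bucket="my-oss-bucket",
    oss_prefix="sls_backup",
    oss_role_arn="acs:ram::1234567890123456:role/aliyunlogdefaultrole",
    buffer_interval=300,
    buffer_mb=128,
    compress_type="snappy",
)
```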
class aliyun.log.OdpsShipperConfig(odps_endpoint, odps_project, odps_table, log_fields_list, partition_column, partition_time_format, bufferInterval=1800)[source]¶
ODPS shipper config
Parameters: - odps_endpoint (string) – the odps endpoint
- odps_project (string) – the odps project name
- odps_table (string) – the odps table name
- log_fields_list (string array) – the list of log fields (keys in the log) mapped to the ODPS table columns, e.g. log_fields_list=[‘__time__’, ‘key_a’, ‘key_b’] maps the log time, key_a, and key_b to ODPS table columns No.1, No.2, and No.3
- partition_column (string array) – the log fields mapped to the ODPS table partition columns
- partition_time_format (string) – the time format of __partition_time__, e.g. yyyy_MM_dd_HH_mm
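A sketch mapping three log fields to the first three columns of an ODPS table (endpoint, project, and table names are placeholders):

```python
from aliyun.log import OdpsShipperConfig

config = OdpsShipperConfig(
    odps_endpoint="http://service.odps.aliyun.com/api",
    odps_project="my_odps_project",
    odps_table="my_odps_table",
    log_fields_list=["__time__", "key_a", "key_b"],
    partition_column=["__partition_time__"],
    partition_time_format="yyyy_MM_dd_HH_mm",
)
```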
class aliyun.log.ShipperTask(task_id, task_status, task_message, task_create_time, task_last_data_receive_time, task_finish_time)[source]¶
A shipper task
Parameters: - task_id (string) – the task id
- task_status (string) – one of [‘success’, ‘running’, ‘fail’]
- task_message (string) – the error message when task_status is ‘fail’
- task_create_time (int) – the task create time (timestamp from 1970.1.1)
- task_last_data_receive_time (int) – last log data receive time (timestamp)
- task_finish_time (int) – the task finish time (timestamp)
class aliyun.log.es_migration.MigrationManager(hosts=None, indexes=None, query=None, scroll='5m', endpoint=None, project_name=None, access_key_id=None, access_key=None, logstore_index_mappings=None, pool_size=10, time_reference=None, source=None, topic=None, wait_time_in_secs=60, auto_creation=True)[source]¶
MigrationManager migrates data from Elasticsearch to Aliyun Log Service.
Parameters: - hosts (string) – a comma-separated list of source ES nodes. e.g. “localhost:9200,other_host:9200”
- indexes (string) – a comma-separated list of source index names. e.g. “index1,index2”
- query (string) – used to filter docs, so that you can specify the docs you want to migrate. e.g. ‘{“query”: {“match”: {“title”: “python”}}}’
- scroll (string) – specify how long a consistent view of the index should be maintained for scrolled search. e.g. “5m”
- endpoint (string) – specify the endpoint of your log services. e.g. “cn-beijing.log.aliyuncs.com”
- project_name (string) – specify the project_name of your log services. e.g. “your_project”
- access_key_id (string) – specify the access_key_id of your account.
- access_key (string) – specify the access_key of your account.
- logstore_index_mappings (string) – specify the mappings of log service logstores to ES indexes, e.g. ‘{“logstore1”: “my_index*”, “logstore2”: “index1,index2”, “logstore3”: “index3”}’
- pool_size (int) – specify the size of process pool. e.g. 10
- time_reference (string) – specify what ES doc’s field to use as log’s time field. e.g. “field1”
- source (string) – specify the value of log’s source field. e.g. “your_source”
- topic (string) – specify the value of log’s topic field. e.g. “your_topic”
- wait_time_in_secs (int) – the waiting time, in seconds, between initializing the log service resources and executing the data migration task, e.g. 60
- auto_creation (bool) – specify whether to let the tool create logstore and index automatically for you. e.g. True
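A sketch of a migration run, assuming the manager exposes a migrate() entry point; the hosts, project, and credentials below are placeholders:

```python
from aliyun.log.es_migration import MigrationManager

mgr = MigrationManager(
    hosts="localhost:9200",
    indexes="index1,index2",
    endpoint="cn-beijing.log.aliyuncs.com",
    project_name="my_project",
    access_key_id="my_access_key_id",
    access_key="my_access_key",
    pool_size=10,
)
mgr.migrate()  # runs the migration tasks in a process pool
```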