I am trying to use ISM (Index State Management) from Open Distro on AWS Elasticsearch Service v7.8. I have set up a basic rollover/delete policy, but it never seems to trigger. I assume I am doing something wrong, but I can't track it down. To keep the test short, I am using a rollover every hour and a delete after 6 hours.
Here is my ISM policy, which I have so appropriately named "test":
{
  "policy": {
    "policy_id": "test",
    "description": "A test policy",
    "last_updated_time": 1605196195481,
    "schema_version": 1,
    "error_notification": null,
    "default_state": "active",
    "states": [
      {
        "name": "active",
        "actions": [
          {
            "rollover": {
              "min_index_age": "1h"
            }
          }
        ],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "6h"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "delete": {}
          }
        ],
        "transitions": []
      }
    ]
  }
}
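In case it helps anyone reproduce this: as I understand it, the same policy can be registered through the ISM API instead of the Kibana UI. The policy_id, last_updated_time, and schema_version fields in the JSON above are metadata the plugin returns on reads, not fields you send, so a create request would look roughly like this:

PUT _opendistro/_ism/policies/test
{
  "policy": {
    "description": "A test policy",
    "default_state": "active",
    "states": [
      {
        "name": "active",
        "actions": [
          { "rollover": { "min_index_age": "1h" } }
        ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "6h" } }
        ]
      },
      {
        "name": "delete",
        "actions": [
          { "delete": {} }
        ],
        "transitions": []
      }
    ]
  }
}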
I've created a template to handle index creation. Here is the template. Notice that the rollover alias is "atest" while the policy_id is "test". I do not add the index to any alias in this template:
PUT /_template/atest
{
  "index_patterns": [
    "atest-*"
  ],
  "settings": {
    "index": {
      "opendistro": {
        "index_state_management": {
          "policy_id": "test",
          "rollover_alias": "atest"
        }
      },
      "analysis": {}
    }
  },
  "mappings": {},
  "aliases": {}
}
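Before creating the index, a quick sanity check confirms the template was stored; the response should echo the index.opendistro.index_state_management settings shown above:

GET _template/atest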
I then create an index matching the template's index pattern, adding it to the rollover alias defined above:
PUT /atest-000001
{
  "aliases": { "atest": {} }
}
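To verify that the template applied and the alias is in place, these two calls can be used as a check: the first should show policy_id and rollover_alias under index.opendistro.index_state_management, and the second should list atest-000001 under the atest alias.

GET atest-000001/_settings

GET _cat/aliases/atest?v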
Then I can see the managed-index doc in the .opendistro-ism-config index:
{
  "_index": ".opendistro-ism-config",
  "_type": "_doc",
  "_id": "T_k8jMI5RvuWRaLp1tY_hg",
  "_version": 2,
  "_score": null,
  "_source": {
    "managed_index": {
      "name": "atest-000001",
      "enabled": true,
      "index": "atest-000001",
      "index_uuid": "T_k8jMI5RvuWRaLp1tY_hg",
      "schedule": {
        "interval": {
          "start_time": 1605200587242,
          "period": 30,
          "unit": "Minutes"
        }
      },
      "last_updated_time": 1605200587242,
      "enabled_time": 1605200587242,
      "policy_id": "test",
      "policy_seq_no": 422,
      "policy_primary_term": 111,
      "policy": {
        "policy_id": "test",
        "description": "A test policy",
        "last_updated_time": 1605196195481,
        "schema_version": 1,
        "error_notification": null,
        "default_state": "active",
        "states": [
          {
            "name": "active",
            "actions": [
              {
                "rollover": {
                  "min_index_age": "1h"
                }
              }
            ],
            "transitions": [
              {
                "state_name": "delete",
                "conditions": {
                  "min_index_age": "6h"
                }
              }
            ]
          },
          {
            "name": "delete",
            "actions": [
              {
                "delete": {}
              }
            ],
            "transitions": []
          }
        ]
      },
      "change_policy": null
    }
  },
  "fields": {
    "managed_index.last_updated_time": [
      "2020-11-12T17:03:07.242Z"
    ],
    "policy.last_updated_time": [],
    "policy.states.actions.notification.destination.last_update_time": [],
    "policy.error_notification.destination.last_update_time": [],
    "managed_index.schedule.interval.start_time": [
      "2020-11-12T17:03:07.242Z"
    ],
    "managed_index.enabled_time": [
      "2020-11-12T17:03:07.242Z"
    ]
  },
  "sort": [
    1605200587242
  ]
}
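For what it's worth, the supported way to inspect this state, rather than querying .opendistro-ism-config directly, is the ISM explain API, which reports the policy, current state, and step for the managed index:

GET _opendistro/_ism/explain/atest-000001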
At some point I see the managed index info go from "Initializing" to:
{
"message": "Successfully initialized policy: test"
}
At this point, nothing happens. The row for "atest-000001" in the ISM console in Kibana shows the "State" as "active", the "Action" as "-", and the "Job Status" as "Running". It remains like this for days. I have also tried enabling ISM explicitly in the cluster settings:
PUT _cluster/settings
{
  "persistent": {
    "opendistro.index_state_management.enabled": true
  }
}
Still nothing triggers. What am I doing wrong?
It turned out to be an internal AWS ES issue: updating the domain to service software release R20201117 resolved it.
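For anyone hitting similar symptoms, two checks can rule out configuration problems on your side before blaming the service. A manual dry-run rollover fails loudly if the alias setup is wrong (for example, if the alias does not resolve to a single write index), and the ISM retry API re-runs a managed index whose job status shows "Failed":

POST /atest/_rollover?dry_run

POST _opendistro/_ism/retry/atest-000001

The retry call didn't apply in my case, since the job stayed in "Running" rather than "Failed", but both are quick to try.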