I have a two-node Cassandra setup on EC2 boxes, with my system_auth keyspace defined as:

> CREATE KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '2'}  AND durable_writes = true;

I have added both nodes as seeds, with endpoint_snitch: SimpleSnitch.

Tested the same with nodetool:

nodetool -h ::FFFF:127.0.0.1 status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load        Tokens  Owns (effective)  Host ID     Rack
UN  10.%.%.%  532.7 KiB   16      54.1%             4dd0ab16-f60602  rack1
UN  10.%.%.%  416.37 KiB  16      45.9%             d1395269-0270e3  rack1

Also tested logging in with the default credentials, cqlsh -u cassandra -p cassandra:

cqlsh -u cassandra -p cassandra
Connected to Test Cluster at 127.0.0.1:9042
[cqlsh 6.0.0 | Cassandra 4.0.2 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
cassandra@cqlsh>

Everything was working until we tested a node going down. Once we stopped the service on one node, we were still able to log in with the secondary users we had created, but unable to log in with the default credentials cqlsh -u cassandra -p cassandra.

It throws this error:

Error from server: code=0100 [Bad credentials] message="Unable to perform authentication: Cannot achieve consistency level QUORUM"

I am unable to understand how to fix this; the current consistency level is ONE:

cassandra@cqlsh> consistency
Current consistency level is ONE.

While debugging, the issue appears to be around the consistency level. I tried changing the system_auth replication class from SimpleStrategy to NetworkTopologyStrategy, but it did not help.


There is 1 answer:

Mário Tavares

Edit: Through some digging, I discovered that my previous answer is not valid for the Cassandra version you're using - see this StackExchange post for more details.

The solution: to fix the authentication consistency, set auth_read_consistency_level to LOCAL_ONE in cassandra.yaml and restart the nodes to apply it.
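
As a minimal sketch, and assuming that setting is available in your Cassandra build (worth verifying against your version's cassandra.yaml defaults), the change would look something like this on each node:

# cassandra.yaml - consistency level used when reading the system_auth tables
auth_read_consistency_level: LOCAL_ONE

With LOCAL_ONE, the credential read only needs one replica of system_auth to be up, so losing one of your two nodes no longer blocks logins.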


Original answer:

You get the authentication error because Cassandra queries the system_auth tables differently for the default "cassandra" superuser.

More precisely, the consistency level used for authentication is:

  • QUORUM - for superuser "cassandra"
  • LOCAL_ONE - for any other user

You can't tune the authentication consistency in CQL.

Since QUORUM with a replication factor of 2 requires both replicas to be available (quorum = floor(2/2) + 1 = 2), you won't be able to authenticate as "cassandra" whenever one of the replicas is down, whereas you can authenticate with any other user as long as one replica is up.

More info in the official documentation:

> During login, the credentials for the default superuser are read with a consistency level of QUORUM, whereas those for all other users (including superusers) are read at LOCAL_ONE. In the interests of performance and availability, as well as security, operators should create another superuser and disable the default one. This step is optional, but highly recommended. While logged in as the default superuser, create another superuser role which can be used to bootstrap further configuration.

I endorse the official recommendation to create a new superuser to use instead of "cassandra".
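
A minimal CQL sketch of that, run while still logged in as the default superuser (the role name and password below are placeholders, not taken from your setup):

-- create a replacement superuser (placeholder name and password)
CREATE ROLE admin_user WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'choose-a-strong-password';

-- once you have verified you can log in as the new role, disable the default one
ALTER ROLE cassandra WITH SUPERUSER = false AND LOGIN = false;

Since every role other than the default "cassandra" is authenticated at LOCAL_ONE, logins with the new superuser keep working even when one of your two replicas is down.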