Tinkoff Gatling JDBC - false response time when enable log trace


Currently I'm looking into Tinkoff's JDBC plugin for Gatling. When I enable TRACE logging, the reported response times become much higher, e.g. from 1500 ms to 25 seconds.

This is my Gatling logback.xml setting:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
        </encoder>
        <immediateFlush>false</immediateFlush>
    </appender>

    <!-- uncomment and set to DEBUG to log all failing HTTP requests -->
    <!-- uncomment and set to TRACE to log all HTTP requests -->
    <logger name="io.gatling.http.engine.response" level="TRACE" />
    <!-- uncomment to log WebSocket events -->
    <!-- <logger name="io.gatling.http.action.ws.fsm" level="TRACE" /> -->

    <!-- uncomment to log SSE events -->
    <!-- <logger name="io.gatling.http.action.sse.fsm" level="TRACE" /> -->
    <root level="TRACE">
        <appender-ref ref="CONSOLE" />
    </root>

</configuration>

The connection definition:

object Actions_test {

    // database connection
    val database_URL: JdbcProtocolBuilder = DB
            .url("mariadb")
            .username("user")
            .password("pw@2024")
            .maximumPoolSize(10)
            .connectionTimeout(10.second)

    // large query
    def callingJDBC(): QueryActionBuilder = 
      jdbc("simple Call")
        .query("""your-large-query-here
               """.stripMargin)
        .check(simpleCheck(x => x.length > 2000)
      )        
}

And the scenario:

class DebugTest extends Simulation {

  val scn = scenario("test")
    .exec(Actions_test.callingJDBC())
    .exitHere

  setUp(
    scn.inject(
      rampUsers(1000).during(500)
    ),
  )
  .protocols(Actions_test.database_URL)
  .maxDuration(10.minute)

}

How do I correctly configure logging for this plugin?

1 Answer

Answered by Gastón Schabas (accepted)

TL;DR

Setting the log level to TRACE is not something you will do frequently, unless you really need to see a lot of detail about the program you are executing. Setting the root logger of your app to TRACE is usually not a good idea, because you will see output from many components (HTTP client, DB client, Gatling, Akka, etc.). A better approach is to set the root logger level to WARN or INFO and raise only some specific loggers to TRACE, just as explained in Gatling's sample logback config:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
        </encoder>
        <immediateFlush>false</immediateFlush>
    </appender>

    <!-- uncomment and set to DEBUG to log all failing HTTP requests -->
    <!-- uncomment and set to TRACE to log all HTTP requests -->
    <!--<logger name="io.gatling.http.engine.response" level="TRACE" />-->

    <!-- uncomment to log WebSocket events -->
    <!--<logger name="io.gatling.http.action.ws.fsm" level="DEBUG" />-->

    <!-- uncomment to log SSE events -->
    <!--<logger name="io.gatling.http.action.sse.fsm" level="DEBUG" />-->

    <root level="WARN">
        <appender-ref ref="CONSOLE" />
    </root>

</configuration>
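Applied to this question, a sketch of that approach would keep the root at WARN and raise only the plugin's own logger. Note that the logger name `ru.tinkoff.load.jdbc` is an assumption based on the plugin's package (visible in its imports), not something confirmed by its documentation:

```xml
<!-- Root stays at WARN; only the JDBC plugin's logger is raised.
     The logger name ru.tinkoff.load.jdbc is assumed from the
     plugin's package and may need adjusting. -->
<logger name="ru.tinkoff.load.jdbc" level="TRACE" />

<root level="WARN">
    <appender-ref ref="CONSOLE" />
</root>
```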

I'm not sure what your goal is exactly, or why you expect good performance at TRACE level. Logback has appenders; in this case you are using the ConsoleAppender:

The ConsoleAppender, as the name indicates, appends on the console, or more precisely on System.out or System.err, the former being the default target. Both System.out and System.err are of type java.io.PrintStream. Consequently, they are wrapped inside an OutputStreamWriter which buffers I/O operations.

This means that the logger performs an I/O operation each time you log a message. You can set the property immediateFlush to false (it is true by default) to buffer the messages. The following is from the OutputStreamAppender documentation:

Property Name: immediateFlush
Type: boolean
Description: The default value for immediateFlush is 'true'. Immediate flushing of the output stream ensures that logging events are immediately written out and will not be lost in case your application exits without properly closing appenders. On the other hand, setting this property to 'false' is likely to quadruple (your mileage may vary) logging throughput. Again, if immediateFlush is set to 'false' and if appenders are not closed properly when your application exits, then logging events not yet written to disk may be lost.

When you change the log level from INFO to TRACE, you increase the number of messages the logger has to write, so it is expected that performance drops.
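As a rough illustration of how the message volume grows with the threshold, here is a minimal sketch. It uses java.util.logging (a stdlib stand-in for Logback, where FINEST plays the role of TRACE) and counts how many of the standard levels actually pass the logger's filter; every extra level that passes means extra messages written:

```scala
import java.util.logging.{Level, Logger}

object LevelVolume {
  // Count how many of the standard levels a logger would actually emit
  // at a given threshold. FINEST is java.util.logging's TRACE analogue.
  def enabledLevels(threshold: Level): Int = {
    val logger = Logger.getLogger("demo")
    logger.setLevel(threshold)
    val levels = Seq(Level.SEVERE, Level.WARNING, Level.INFO,
                     Level.FINE, Level.FINER, Level.FINEST)
    levels.count(l => logger.isLoggable(l))
  }

  def main(args: Array[String]): Unit = {
    println(s"INFO threshold:   ${enabledLevels(Level.INFO)} of 6 levels emitted")   // 3 of 6
    println(s"FINEST threshold: ${enabledLevels(Level.FINEST)} of 6 levels emitted") // 6 of 6
  }
}
```

With an INFO threshold only SEVERE, WARNING and INFO pass; at FINEST all six do, and each extra message costs another (possibly flushed) write.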

From the Apache Commons Logging library, you can read the following best practices for log level messages:

Message Priorities/Levels

It is important to ensure that log messages are appropriate in content and severity. The following guidelines are suggested:

  • fatal - Severe errors that cause premature termination. Expect these to be immediately visible on a status console.
  • error - Other runtime errors or unexpected conditions. Expect these to be immediately visible on a status console.
  • warn - Use of deprecated APIs, poor use of API, 'almost' errors, other runtime situations that are undesirable or unexpected, but not necessarily "wrong". Expect these to be immediately visible on a status console.
  • info - Interesting runtime events (startup/shutdown). Expect these to be immediately visible on a console, so be conservative and keep to a minimum.
  • debug - detailed information on the flow through the system. Expect these to be written to logs only.
  • trace - more detailed information. Expect these to be written to logs only.

That being said, if you really need to set the log level to TRACE, you can try playing with the Logback configuration and using different appenders instead of the default config provided by Gatling.
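For example, one option (the appender names and the trace.log path here are illustrative, not from the original setup) is to route TRACE output through an AsyncAppender into a file, so the simulation threads hand messages to a background thread instead of blocking on console I/O:

```xml
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>trace.log</file>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
    </encoder>
    <immediateFlush>false</immediateFlush>
</appender>

<!-- AsyncAppender buffers events in a queue and writes them from a
     background thread, keeping I/O out of the measured requests -->
<appender name="ASYNC-FILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
    <queueSize>16384</queueSize>
    <neverBlock>true</neverBlock>
</appender>

<root level="TRACE">
    <appender-ref ref="ASYNC-FILE" />
</root>
```

Note that with neverBlock set to true, events are dropped rather than blocking when the queue is full, which trades log completeness for response-time accuracy.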


I recreated the case you described in your question locally with the following files:

  • build.sbt
ThisBuild / scalaVersion := "2.13.11"

lazy val GatlingSettings = inConfig(Gatling)(Defaults.testSettings)

lazy val root = (project in file("."))
  .enablePlugins(GatlingPlugin)
  .settings(
    name := "stackoverflow-pocs-secondary",
    libraryDependencies ++= Seq(
      "ch.qos.logback"              % "logback-classic"           % "1.4.14",
      "com.typesafe.scala-logging" %% "scala-logging"             % "3.9.5",
      "io.gatling.highcharts"       % "gatling-charts-highcharts" % "3.10.3",
      "io.gatling"                  % "gatling-test-framework"    % "3.10.3",
      "ru.tinkoff"                 %% "gatling-jdbc-plugin"       % "0.10.3",
      "org.postgresql"              % "postgresql"                % "42.7.1"
    ),
    GatlingSettings,
    run / fork := true
  )
  • project/plugins.sbt
addSbtPlugin("io.gatling" % "gatling-sbt" % "4.7.0")
  • src/gatling/BasicSimulation.scala
import io.gatling.core.Predef._
import ru.tinkoff.load.jdbc.Predef._
import ru.tinkoff.load.jdbc.actions.actions._
import ru.tinkoff.load.jdbc.protocol.JdbcProtocolBuilder

import scala.concurrent.duration.DurationInt

class BasicSimulation extends Simulation {

  val scn = scenario("test")
    .exec(ActionsTest.callingJDBC())
    .exitHere
  setUp(scn.inject(rampUsers(800).during(30)))
    .protocols(ActionsTest.database_URL)
    .maxDuration(10.minute)

}

object ActionsTest {
  val database_URL: JdbcProtocolBuilder = DB
    .url("jdbc:postgresql://localhost:5432/postgres")
    .username("admin")
    .password("admin")
    .maximumPoolSize(10)
    .connectionTimeout(10.second)
  def callingJDBC(): QueryActionBuilder =
    jdbc("simple Call")
      .query("select * from dummy_random_data")
      .check(simpleCheck(x => x.length > 2000))
}
  • src/gatling/resources/logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
        </encoder>
        <immediateFlush>false</immediateFlush>
    </appender>

    <appender name="ASYNC-CONSOLE" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="CONSOLE"/>
        <queueSize>16384</queueSize>
        <maxFlushTime>MAX_FLUSH_TIME</maxFlushTime>
        <neverBlock>false</neverBlock>
    </appender>

    <root level="WARN">
        <appender-ref ref="CONSOLE"/>
        <!-- <appender-ref ref="ASYNC-CONSOLE"/> -->
    </root>

</configuration>
  • docker-compose.yaml
version: '3.1'
services:
  db:
    image: postgres:15.4-alpine3.18
    restart: always
    environment:
      - POSTGRES_PASSWORD=admin
      - POSTGRES_USER=admin
      - POSTGRES_DB=postgres
    ports:
      - "5432:5432"

Then we need to run the following commands to start the database, create the table, and populate it with some data:

# start the database
docker compose up -d

# create the table
docker exec -it stackoverflow-pocs-secondary-db-1 psql postgresql://admin:admin@localhost:5432/postgres -c 'create table dummy_random_data (
  id serial not null,
  dummy_string text not null
);'

# fill with dummy data
docker exec -it stackoverflow-pocs-secondary-db-1 psql postgresql://admin:admin@localhost:5432/postgres -c 'insert into dummy_random_data(dummy_string) select md5(i::text) from generate_series(1, 100000) s(i);'

and finally we can run the Gatling simulation with:

sbt Gatling/test

You can run the Gatling simulation as many times as you want with different Logback configurations. You should see that each time you set the root log level to TRACE, there is a noticeable impact on performance.