Microservice event-driven communication: how to notify only the caller with the command/event approach

91 views · Asked by Eddy Bayonne

How do you avoid notifying every service that consumes the same event? For example, services A and B both consume event X, but based on some rule you want to deliver event X only to service A. I am not talking about Kafka consumer groups or a correlation ID; I am using event-driven microservices with the command/event approach.

There is 1 answer
I think this can be done with Kafka partitions and partition keys. Create topic X with several partitions, and have every calling service use its own key when it sends a command; Kafka's partitioner routes records with the same key to the same partition. The service that handles the command then publishes the resulting event with the same key it received. In the end, Service A (the producer of the command) reads the event from its own partition and is the only service that receives it, so with the command/event approach this may work (see the sketch below).
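For illustration, here is a minimal sketch with the plain Kafka Java client. It is not from the question: the topic names `commands` and `events`, the key `service-A`, and the partition number are assumptions. The command handler is assumed to publish its event with the same key it read from the command record; Service A then reads only the partition that its key maps to.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ServiceAClient {

    public static void main(String[] args) {
        // --- Command side: Service A sends a command keyed by its own identity. ---
        // Kafka hashes the key to pick the partition, so everything keyed
        // "service-A" lands on the same partition of each topic.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("commands", "service-A",
                    "{\"type\":\"DO_SOMETHING\"}"));
        }

        // The command handler (not shown) is assumed to reply with the SAME key:
        // producer.send(new ProducerRecord<>("events", commandRecord.key(), eventPayload));

        // --- Event side: Service A assigns itself only "its" partition of the
        // events topic instead of subscribing, so it never sees events that
        // were keyed for other callers. ---
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "service-A-replies");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            // Assumption: partition 0 is where the key "service-A" hashes to;
            // in practice this must match the producer's partitioner.
            int myPartition = 0;
            consumer.assign(List.of(new TopicPartition("events", myPartition)));

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r ->
                    System.out.printf("event for %s: %s%n", r.key(), r.value()));
        }
    }
}
```

The fiddly part of this approach is that Service A has to know which partition its key hashes to, for example by computing it with the same partitioner the producers use, or by agreeing on an explicit key-to-partition mapping up front.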
On the other hand, by doing so you give up one of the main benefits of partitions, which is scalability: partitions become tied to individual callers instead of being used to spread the load across parallel consumers.