When I think of the word “imposter” my mind goes to movies where the criminal is revealed after their disguise is removed. But imposters don’t have to be evil geniuses. Sometimes imposters are good. You may buy generic types of pharmaceuticals or swap out beef for a meat-less hamburger. The goal is to get the experience of what you’re after, but through secondary means. In that sense, Azure Event Hubs is a fantastic imposter.
Now to be sure, Azure Event Hubs is a terrific standalone cloud service. If you need to reliably ingest tons of events in a scalable way, there aren’t many (any?) better options.
Microsoft’s gotten into the habit of putting facades onto their cloud services to turn them into credible imposters. For instance, Azure Cosmos DB has its own native interface, but also ones that mimic MongoDB and Apache Cassandra. Azure Event Hubs got in on the action by recently adding an Apache Kafka interface. If you have apps or tools that use those interfaces, it’s much easier to adopt the Azure “equivalents” now. Azure isn’t actually offering MongoDB, Cassandra, or Kafka, but its first-party services resemble them closely enough that you don’t have to change code to get the benefits of Azure’s global scale. Good strategy.
Java developers everywhere use Spring Cloud Stream to talk to popular message brokers like RabbitMQ and Apache Kafka. I thought it’d be fun to see if Spring Cloud Stream “just works” with the new Event Hubs interface. LET’S SEE WHAT HAPPENS.
Creating our Azure Event Hub
I started in the Microsoft Azure portal. From there, I chose to add a new Event Hubs namespace which holds all my actual Event Hubs. For kicks, I chose the “basic” pricing tier which allows a single consumer group and a hundred connections. I set the “enable Kafka” flag and configured three throughput units (each Event Hubs partition scales to a maximum of one throughput unit).
Once I had an Event Hubs namespace, I added an Event Hub to it. I gave the Event Hub a name and three partitions to spread the data across. If I had chosen a plan besides “basic”, I would have also been able to set the message retention period beyond one day.
That’s it. Might be the simplest possible way to create a Kafka-like service!
Spring Boot project setup
The whole point of Spring Boot is to eliminate boilerplate code and make it easier to focus on building robust apps. For example, you don’t want to mess with all that broker-specific logic when you want to pass messages or events around. Spring Boot and Spring Cloud Stream make it straightforward.
To get going, I went to start.spring.io. Devs create over 800,000 projects per month from here, so I added two more to the mix. My first project (source code here) added Spring Cloud Stream and Web dependencies. This is my message producer. Then I created a second project (source code here) with the Spring Cloud Stream dependency. This one acts as my message consumer.
This is a Maven project, so I added one more dependency directly to the pom.xml file. This one tells my project to activate the Kafka-specific objects upon application startup.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
Configuring the producer
With my projects created, it was time to configure them. First up, the producer.
I made this one simple for demo purposes. First, I added Spring Boot annotations on the primary class. The @RestController annotation exposes annotated operations as HTTP endpoints. The second annotation, @EnableBinding(Source.class), sets this up as a Spring Cloud Stream app that uses the channels identified in the default “Source” interface.
Next, I autowired a Source object that gets autoconfigured by the Spring Boot process at startup. Finally, I defined a method that responds to HTTP POST requests and sends a message to the “output” channel. This is where the message is sent to Event Hubs, but notice that my code is completely unaware of that fact.
@EnableBinding(Source.class)
@RestController
@SpringBootApplication
public class EventHubsKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(EventHubsKafkaApplication.class, args);
    }

    @Autowired
    Source mySource;

    @RequestMapping(method=RequestMethod.POST, path="/")
    public String PublishMessage(@RequestBody String company) {
        mySource.output().send(MessageBuilder.withPayload(company).build());
        return "success";
    }
}
That’s all the code I needed to test this out. All that was left for my producer was to configure the application.properties file. This lets me set some of the properties needed to connect to my message broker. Before setting them, I went back to the Microsoft Azure portal and grabbed the connection string for my Event Hub.
With that connection string in hand, I set up my application.properties file. First, I defined my brokers. In this case, it’s the fully qualified domain name of my Event Hubs namespace. Next I set the channel destination, in this case, the Event Hub named “eh1.” Finally, I configured my security settings, which includes the connection string.
spring.cloud.stream.kafka.binder.brokers=rseroter-eventhubs-tu1-cg1.servicebus.windows.net:9093
spring.cloud.stream.bindings.output.destination=eh1
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://rseroter-eventhubs-tu1-cg1.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=";
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=PLAIN
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
With that, the producer app was done.
Configuring the consumer
The consumer app is even simpler! Its main class also has an @EnableBinding annotation, and this one binds to the channels in the default Sink class. Then, it makes use of the very handy @StreamListener which grabs data from the sink and handles content negotiation.
@EnableBinding(Sink.class)
@SpringBootApplication
public class EventHubsKafkaConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EventHubsKafkaConsumerApplication.class, args);
    }

    @StreamListener(target=Sink.INPUT)
    public void logMessages(String msg) {
        System.out.println("message is: " + msg);
    }
}
This was all the code needed to talk to a message broker or event stream processor and do something when a message arrives. Amazing.
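Incidentally, the content negotiation mentioned above means the listener doesn’t have to accept a raw String. Here’s a small sketch of a variation, assuming the POSTed JSON looks something like {"company":"company-1"} and using a hypothetical Company POJO to match it; Spring Cloud Stream converts the incoming payload into the typed object before invoking the method.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
@SpringBootApplication
public class EventHubsKafkaConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EventHubsKafkaConsumerApplication.class, args);
    }

    // Hypothetical POJO; assumes the POSTed JSON has a single "company" field
    public static class Company {
        private String company;
        public String getCompany() { return company; }
        public void setCompany(String company) { this.company = company; }
    }

    // Spring Cloud Stream deserializes the JSON payload into the Company type
    @StreamListener(target = Sink.INPUT)
    public void logMessages(Company msg) {
        System.out.println("company is: " + msg.getCompany());
    }
}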
The application properties for the consumer are virtually identical to the producer. The only difference is the second property that refers to the input channel, versus the producer that refers to the output channel.
spring.cloud.stream.kafka.binder.brokers=rseroter-eventhubs-tu1-cg1.servicebus.windows.net:9093
spring.cloud.stream.bindings.input.destination=eh1
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://rseroter-eventhubs-tu1-cg1.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=";
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=PLAIN
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
With that, I had two working apps.
Testing everything
I first ran a simple test. This involved starting up both the producer and the consumer apps. The producer noticed that I had three partitions and handled them accordingly. The consumer recognized the three partitions as well, and joined an anonymous consumer group (since I didn’t specify one).
I fired up Postman and posted a JSON message to the REST endpoint exposed by my app. I sent in messages with company names of company-1, company-2, and company-3, and they showed up in my consumer app!
To make sure this wasn’t some other witchcraft going on, I checked the Microsoft Azure portal, and sure enough, saw the connections and messages counted there.
Simple, right?
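If you’d rather script that test than click through Postman, here’s a minimal sketch using the JDK 11+ HttpClient. It assumes the producer app is running locally on Spring Boot’s default port 8080, and the JSON body shape is just an example (the producer forwards whatever it receives):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PublishTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Assumes the producer is listening on Spring Boot's default port 8080
        for (int i = 1; i <= 3; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"company\":\"company-" + i + "\"}"))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // The producer's PublishMessage method returns "success"
            System.out.println(response.statusCode() + " " + response.body());
        }
    }
}

Each POST should return the “success” string from the producer and trigger a corresponding “message is: …” line in the consumer’s console.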
BONUS: Messing with Kafka Consumer Groups
I wanted to try something else, too. Kafka offers “consumer groups” where a single consumer instance within the group gets the message. This lets you load balance by having multiple instances of your consumer app, without each one getting a copy of the message. Every consumer group gets the message, so the same message may get read many times by different consumer groups. Azure Event Hubs has the same concept, and it behaves the same way. Spring Cloud Stream also offers this abstraction, even when the underlying broker (e.g. RabbitMQ) doesn’t offer it.
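If the Spring abstraction makes that feel too magical, here’s a hedged sketch of the same consumer-group behavior using the plain Kafka Java client against the Event Hubs endpoint. The broker address, topic name, and SASL settings mirror the application.properties shown earlier; the “app1” group id and the connection-string placeholder are just examples.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PlainKafkaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Same broker endpoint and security settings as in application.properties
        props.put("bootstrap.servers", "rseroter-eventhubs-tu1-cg1.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" password=\"<your Event Hubs connection string>\";");
        // Every consumer sharing this group id splits the partitions between them
        props.put("group.id", "app1");
        // Read from the start of the stream when this group has no committed offsets yet
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("eh1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("partition " + record.partition() + ": " + record.value());
                }
            }
        }
    }
}

Start two copies of this with the same group.id and the partitions get split between them; use a different group.id and that group gets its own independent view of the stream.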
I could set the consumer group property directly in the application.properties file. Just add “spring.cloud.stream.bindings.input.group=app1” to it. But I wanted to pass this in at application startup so that I could start up multiple instances of the app with different consumer group values.
I started up an instance of my consumer (java -jar event-hubs-kafka-consumer-0.0.1-SNAPSHOT.jar --spring.cloud.stream.bindings.input.group=app1) and passed in that property. Notice that it read the full log (because this was the first time the consumer group saw it), and the consumer group is indicated in the log.
Interestingly, when I started a second instance, it didn’t grab what was already read by the consumer group (expected), but when I sent in three more events, BOTH instances got a copy (not expected). I saw some other weirdness when I set “partitioned=true” on the consumer and provided an “instanceIndex” number for each consumer app instance.
What’s ALSO interesting is that even though I set up an Event Hubs namespace that allows only a single consumer group, I can apparently create as many as I want through the Kafka interface. Here, I started up another instance of my consumer, but with a different consumer group ID (java -jar event-hubs-kafka-consumer-0.0.1-SNAPSHOT.jar --spring.cloud.stream.bindings.input.group=app2), and it also read the full log, as you’d expect from a new consumer group. In fact, I created two new ones (app2, app3) and both worked.
So, it seems like the consumer group limits aren’t applying here. I checked the consumer groups in Azure to see if these were being created behind the scenes, but using the Azure API, I still only saw the $Default one there. I have no idea where the Kafka consumer groups show up but they’re clearly in place. Otherwise, I wouldn’t have seen the correct behavior as each new consumer group came online!
I’ll chalk it up to one of two possible things: (1) I’m a mediocre programmer so I probably screwed something up, or (2) it’s an alpha product and everything might not be wired up just yet. Or both!
Regardless, it’s VERY simple to try out a Kafka-compatible interface on a cloud-hosted service thanks to Azure Event Hubs and Spring Cloud Stream. Kafka users should keep an eye on Azure Event Hubs as a legit option for a cloud-hosted event stream processor.