Oracle Integration Cloud – Kafka Adapter with Apache AVRO

The Oracle Integration Cloud (OIC) May 2021 release brought Apache AVRO support to the Kafka Adapter. This is something a lot of customers asked for, as Avro is widely used.

If this is the first time you are using the Kafka Adapter with OIC, please check these previous posts: Kafka Adapter for OIC and Kafka Trigger.

What is AVRO

Apache Avro is a binary serialization format. The format is schema-based, so it relies on schemas defined in JSON. These schemas define the fields of a record and their types.

Avro data is described in a language-independent schema. The schema is usually described in JSON and the serialization is usually to binary files, although serializing to JSON is also supported. Avro assumes that the schema is present when reading and writing files, usually by embedding the schema in the files themselves.

One of the most interesting features of Avro, and what makes it a good fit for use in a messaging system like Kafka, is that when the application that is writing messages switches to a new schema, the applications reading the data can continue processing messages without requiring any change or update.

Kafka: The Definitive Guide, 2nd Edition
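
To illustrate the schema-evolution point made in the quote above, here is a minimal, hypothetical Java sketch using the Apache Avro library (the trimmed v1/v2 Employee schemas and the department field are made up for illustration). It checks that a consumer built against the original schema can still read records written with an evolved schema that adds a defaulted field.

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;

public class SchemaEvolutionCheck {
    public static void main(String[] args) {
        // v1: the schema the existing consumers were built against
        Schema readerV1 = new Schema.Parser().parse(
            "{\"namespace\": \"com.tech.trantor\", \"type\": \"record\", \"name\": \"Employee\","
          + " \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}");

        // v2: the producer starts writing an extra field with a default value
        Schema writerV2 = new Schema.Parser().parse(
            "{\"namespace\": \"com.tech.trantor\", \"type\": \"record\", \"name\": \"Employee\","
          + " \"fields\": [{\"name\": \"name\", \"type\": \"string\"},"
          + "              {\"name\": \"department\", \"type\": \"string\", \"default\": \"N/A\"}]}");

        // Avro's schema-resolution rules confirm that the old reader can still
        // process records written with the new schema (the extra field is simply ignored).
        SchemaCompatibility.SchemaPairCompatibility result =
            SchemaCompatibility.checkReaderWriterCompatibility(readerV1, writerV2);
        System.out.println(result.getType()); // COMPATIBLE
    }
}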

Below is a sample AVRO schema.

{"namespace": "com.tech.trantor",
  "type": "record",  "name": "Employee",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "Surname", "type": "string"},
        {"name": "age",  "type": "int"},
        {"name": "email",  "type": "string"},
        {"name": "phone",  "type": "string"} 
    ]
}
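
To make the binary-serialization part concrete, here is a minimal, hypothetical Java sketch (the sample field values are invented) that builds an Employee record against this schema and serializes it to Avro binary with the Apache Avro library.

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class EmployeeAvroSerialization {
    public static void main(String[] args) throws Exception {
        // In a real project the schema would be loaded from the .avsc file;
        // it is inlined here to keep the sketch self-contained.
        Schema schema = new Schema.Parser().parse(
              "{\"namespace\": \"com.tech.trantor\", \"type\": \"record\", \"name\": \"Employee\","
            + " \"fields\": ["
            + "   {\"name\": \"name\", \"type\": \"string\"},"
            + "   {\"name\": \"Surname\", \"type\": \"string\"},"
            + "   {\"name\": \"age\", \"type\": \"int\"},"
            + "   {\"name\": \"email\", \"type\": \"string\"},"
            + "   {\"name\": \"phone\", \"type\": \"string\"}]}");

        // Build a record that conforms to the schema
        GenericRecord employee = new GenericData.Record(schema);
        employee.put("name", "John");
        employee.put("Surname", "Doe");
        employee.put("age", 42);
        employee.put("email", "john.doe@example.com");
        employee.put("phone", "+1-555-0100");

        // Serialize the record to compact Avro binary
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(employee, encoder);
        encoder.flush();

        System.out.println("Serialized size: " + out.size() + " bytes");
    }
}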

Produce AVRO Message

Let’s create a simple App Driven Integration with a REST trigger as seen below.

Then we add the Kafka Adapter.

We want to Publish data into a topic.

I have a dedicated Topic called DTAvro, and we want to specify the message structure.

In the screen below you can see the new option that allows us to select an Avro schema. I have a file with the .avsc extension with the contents below.

{"namespace": "com.tech.trantor",
  "type": "record",  "name": "Employee",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "Surname", "type": "string"},
        {"name": "age",  "type": "int"},
        {"name": "email",  "type": "string"},
        {"name": "phone",  "type": "string"} 
    ]
}

Once that is done, we can map the data from the REST Trigger into the Kafka Target.

This is what the (very basic) Integration looks like at the end.

Using the TEST capabilities of OIC, I trigger the Integration.

And all flows smoothly 🙂

Kafka Console

In Kafka, I can then confirm the message with the command below.

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic DTAvro --from-beginning

This shows all the messages in that Topic, which in my case is only the message produced above.

The steps to consume a message in Avro format are pretty much the same as the steps to produce one.

Conclusion

As mentioned at the beginning, this feature was highly anticipated as it is a market standard. It is available for both the Kafka and the OCI Streaming Service Adapters, for Invoke (Consume/Produce) and Trigger (Consume).

The OIC Kafka/OSS Adapters simplify the whole process, as one does not need to worry about the Serializer and Deserializer, which is something that is required when using a Java client, for example.
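
As a rough illustration of what the adapters save you from, here is a hypothetical sketch of a plain Java producer, assuming a Confluent-style schema registry is in use (the broker address, registry URL, key and record values are placeholders). With OIC you simply select the .avsc file in the adapter wizard, as shown earlier.

import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // With a plain Java client you have to pick and configure the Avro serializer
        // yourself, here the Confluent one, which also needs a schema registry URL.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // Abbreviated Employee schema, inlined for brevity
        Schema schema = new Schema.Parser().parse(
              "{\"namespace\": \"com.tech.trantor\", \"type\": \"record\", \"name\": \"Employee\","
            + " \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}");

        GenericRecord employee = new GenericData.Record(schema);
        employee.put("name", "John");

        // Publish the Avro record to the same topic used in this post
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("DTAvro", "employee-1", employee));
        }
    }
}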

For more information, check the documentation!