Kafka Producer
Producers write data to topics (which are made up of partitions):
We know that a topic holds data, but to send data into a Kafka topic we need a Kafka producer.
In a block diagram:
Suppose Topic A has partition 0, partition 1, and partition 2. Producers send data into these topic partitions.
Producers know which partition to write to and which Kafka broker has it:
The producer knows in advance to which partition the message will be written.
People often think that Kafka decides at the end which partition the data gets written to, but this is wrong.
The producer decides in advance which partition to write to.
In case of Kafka broker failures, producers will automatically recover:
If a Kafka broker that holds a partition fails, the producer knows how to automatically recover.
There is a lot that happens behind the scenes within Kafka.
So we have load balancing here, because producers send data across all partitions based on some mechanism, and this is why Kafka scales: a topic has many partitions, and each partition receives messages from one or more producers.

Producers: Message Keys
Producers can choose to send a key with the message (string, number, binary, etc.):
The message itself contains the data, but we can also add a key, which is optional. The key can be anything you want: a string, a number, binary, etc.
If key = null, data is sent round robin (partition 0, then 1, then 2, ...):
If the key is null, the data is sent round robin, meaning it goes to partition 0, then partition 1, then partition 2, and so on. This is how we get load balancing.
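The round-robin idea above can be sketched with a few lines of Python. This is a simplified simulation, not the real client: a hypothetical assigner that cycles through partition numbers the way a producer spreads keyless messages. (Note that newer Kafka versions, 2.4+, actually use a "sticky" partitioner that fills one batch per partition at a time before moving on, but the load-balancing effect across partitions is the same.)

```python
from itertools import cycle

def round_robin_assigner(num_partitions):
    """Yield partition numbers 0, 1, 2, ... in a repeating cycle,
    mimicking how keyless messages are spread across partitions."""
    return cycle(range(num_partitions))

# Six keyless messages sent to a 3-partition topic:
assigner = round_robin_assigner(3)
assignments = [next(assigner) for _ in range(6)]
print(assignments)  # [0, 1, 2, 0, 1, 2]
```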
If key != null, then all messages for that key will always go to the same partition:
If the key has some value (a string, a number, binary, etc.), the Kafka producer has a very important property: all messages that share the same key will always be written to the same partition, via a hashing strategy. This property is very important in Apache Kafka.
A key is typically sent if you need message ordering for a specific field (e.g. truck_id):
We specify a key when we need message ordering for a specific field.
For example: suppose a goods delivery company has many trucks and wants each truck's location at one-minute intervals. The trucks send their GPS location with truck_id as the key and the GPS coordinates as the value.
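The key-to-partition mapping can be sketched as below. Kafka's default partitioner hashes the serialized key bytes with the murmur2 algorithm; as an illustrative stand-in, this sketch uses Python's `zlib.crc32`, which is also deterministic, so the "same key, same partition" property still holds:

```python
import zlib

def partition_for_key(key: str, num_partitions: int) -> int:
    # Real Kafka hashes the key bytes with murmur2; zlib.crc32
    # stands in here as a deterministic hash for illustration.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Every message from the same truck lands in the same partition,
# so per-truck message ordering is preserved.
for truck_id in ["truck_1", "truck_2", "truck_1", "truck_3", "truck_1"]:
    print(truck_id, "-> partition", partition_for_key(truck_id, 3))
```

Because the hash of a given key never changes, "truck_1" is printed with the same partition number every time it appears.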

Kafka Message Serializer
Kafka only accepts bytes as input from producers and sends bytes out as output to consumers:
Kafka only accepts a stream of bytes as input from producers, and it sends bytes as output to consumers.
Message serialization means transforming objects/data into bytes:
The messages we construct are not bytes, so we perform message serialization: transforming the data or objects into bytes.
They are used on the key and the value:
Serializers are applied to the key and to the value, each of which can use a different serializer.
Common Serializers:
String
Int
Float
Avro
Protobuf
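As a minimal sketch of what serialization does, the snippet below turns an integer key and a string value into bytes in pure Python: a 4-byte big-endian encoding for the integer (as Kafka's Java IntegerSerializer produces) and UTF-8 for the string (as the StringSerializer produces). In a real producer you would configure the client's serializer classes rather than call these by hand.

```python
import struct

def serialize_int(value: int) -> bytes:
    # 4-byte big-endian integer, like Kafka's IntegerSerializer
    return struct.pack(">i", value)

def serialize_string(value: str) -> bytes:
    # UTF-8 bytes, like Kafka's StringSerializer
    return value.encode("utf-8")

# e.g. a numeric truck id as the key, a GPS reading as the value
key_bytes = serialize_int(123)
value_bytes = serialize_string("lat=48.85,lon=2.35")
print(key_bytes)    # b'\x00\x00\x00{'
print(value_bytes)  # b'lat=48.85,lon=2.35'
```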
