kafka-client
Here are 142 public repositories matching this topic...
SQL Insert Statement
Current behavior:
All the SQL activities either don't support Insert or are specific to a single use case.
Expected behavior:
The ability to insert rows into a SQL database from an activity.
What is the motivation / use case for changing the behavior?
Many workflows/pipelines require logging to a database.
Additional information you deem important (e.g. I need this tomorrow):
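A generic insert step of the kind requested here can be sketched in Python with the standard-library sqlite3 module. The table and column names are illustrative; a real activity would take the connection, table name, and row values as inputs:

```python
import sqlite3

def insert_activity(conn, table, row):
    """Generic insert step: writes one row (a dict of column -> value).

    Table and column names come from trusted pipeline configuration and are
    interpolated into the statement; the values themselves are bound as
    parameters, so user data never touches the SQL text.
    """
    columns = ", ".join(row.keys())
    placeholders = ", ".join("?" for _ in row)
    sql = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
    conn.execute(sql, tuple(row.values()))
    conn.commit()

# Example: logging a pipeline event to an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pipeline_log (run_id TEXT, status TEXT)")
insert_activity(conn, "pipeline_log", {"run_id": "run-42", "status": "ok"})
```

The same shape works against any DB-API driver, which is why a single generic Insert activity could cover the use cases the existing SQL activities miss.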
Is your feature request related to a problem? Please describe.
Currently, there are many errors that do not provide certain metadata. For example, TOPIC_AUTHORIZATION_FAILED does not report which topic(s) failed to authorize. We provide our producer with multiple brokers and use sendBatch to send to multiple topics. It appears that some messages go through to one broker but not another.
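Until the client carries this metadata itself, one workaround is to wrap the send call and attach the attempted topics to any error it raises. A minimal Python sketch of the idea; the producer and error types here are hypothetical stand-ins, not the kafkajs API:

```python
class TopicAuthorizationError(Exception):
    """Hypothetical stand-in for TOPIC_AUTHORIZATION_FAILED: carries no topic info."""

def send_batch_with_context(producer, topic_messages):
    """Wraps producer.send_batch so failures report which topics were attempted."""
    topics = sorted(topic_messages.keys())
    try:
        return producer.send_batch(topic_messages)
    except TopicAuthorizationError as err:
        # Re-raise with the attempted topics attached, chaining the original.
        raise TopicAuthorizationError(
            f"authorization failed while sending to topics {topics}"
        ) from err

# Fake producer that rejects everything, to show the enriched error.
class FailingProducer:
    def send_batch(self, topic_messages):
        raise TopicAuthorizationError()

try:
    send_batch_with_context(FailingProducer(), {"orders": [b"m1"], "audit": [b"m2"]})
except TopicAuthorizationError as err:
    message = str(err)
```

This only narrows the failure down to the batch's topics; having the broker response expose the failing topic(s) directly, as the issue asks, is strictly better.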
Typo
The error message "Hostname could not be found in context. HostNamePartitioningStrategy will not work." and the variable name "hostname" are misleading: ContextNameKeyingStrategy reads the context name, not a hostname.

ContextNameKeyingStrategy: <-- problem code

    @Override
    public void setContext(Context context) {
        super.setContext(context);
        final String hostname = context.getProperty(CoreConstants.CONTEXT_NAME_KEY);
        if (hostname == null) {
            addError("Hostname could not be found in context. HostNamePartitioningStrategy will not work.");
        }
    }

based on: https://kafka.js.org/docs/configuration and tulios/kafkajs#298
We may not have the correct settings for the JSConsumer and JSProducer. This issue is to ensure we have them up to date after nodefluent/node-sinek#154 has been merged
Similarly to #234, it would be useful to provide functions for creating test KafkaProducers.
A good first function would be one which yields somewhat sensible default RecordMetadata.
    object KafkaProducer {
      def unit[F[_], K, V](implicit F: Sync[F]): F[KafkaProducer[F, K, V]] = ???
    }

Likely, this would require some internal state, hence F[KafkaProducer[F, K, V]].
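Outside Scala, the same idea — a no-op producer that hands back sensible default metadata for tests — can be sketched in Python for illustration (the names are hypothetical, not the fs2-kafka API):

```python
from dataclasses import dataclass

@dataclass
class RecordMetadata:
    """Default metadata a test producer can return for every record."""
    topic: str
    partition: int = 0
    offset: int = -1

class UnitProducer:
    """Test double: accepts records, returns default metadata, sends nothing.

    It keeps internal state (a per-topic offset counter), which mirrors why
    constructing such a producer is itself effectful, i.e. F[KafkaProducer[...]].
    """
    def __init__(self):
        self._offsets = {}

    def send(self, topic, key, value):
        offset = self._offsets.get(topic, 0)
        self._offsets[topic] = offset + 1
        return RecordMetadata(topic=topic, partition=0, offset=offset)

producer = UnitProducer()
meta = producer.send("events", b"k", b"v")
```

Incrementing offsets per topic is the "somewhat sensible default": consumers of the metadata see monotonically increasing offsets, as they would from a real broker.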
When draining some brokers of their topics, it could be useful to have a json_assignment option that lets the user specify how to reassign topic partitions across the cluster.
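For reference, Kafka's own kafka-reassign-partitions.sh tool consumes a JSON document of the following shape, which a json_assignment option could plausibly mirror (topic names and broker ids here are illustrative):

```json
{
  "version": 1,
  "partitions": [
    {"topic": "events", "partition": 0, "replicas": [2, 3]},
    {"topic": "events", "partition": 1, "replicas": [3, 4]}
  ]
}
```

Each entry pins one partition to an explicit replica list, so draining a broker amounts to emitting entries whose replica lists omit it.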
https://github.com/collectd/collectd/blob/0b2796dfa3b763ed10194ccd66b39b1e056da9b9/src/mysql.c#L772
Hi,
As I saw in the source for the mysql plugin, the collector specifically ignores the Prepared_stmt_count variable.
I would like to have that in collectd's output as well.
Is it possible to enable this key in the collectd mysql collector?
Unfortunately my C skills are pretty near zero.
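For anyone prototyping before a C patch lands: the value in question is a plain gauge available from SHOW GLOBAL STATUS, so the extraction the plugin would do can be sketched in Python (the status rows below are hard-coded stand-ins for a real query result):

```python
def prepared_stmt_count(status_rows):
    """Extracts Prepared_stmt_count from (name, value) status rows.

    Returns it as an integer gauge — the kind of value the collectd mysql
    plugin dispatches for other status variables — or None if absent.
    """
    for name, value in status_rows:
        if name == "Prepared_stmt_count":
            return int(value)
    return None

# Stand-in for: SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count'
rows = [("Prepared_stmt_count", "17")]
count = prepared_stmt_count(rows)
```

A custom exec or Python plugin running this query is a stopgap; teaching the mysql plugin itself to stop ignoring the variable is still the proper fix.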