@alex-laycalvert Here's the recommended break-down, but I'm open to discussion:
1. Add the first onAfterCommit hook to MongoDBEventStore
This could be done similarly to what I did for the in-memory one: #152.
That should also enable using the in-memory bus.
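The hook could look roughly like the in-memory variant. Below is a minimal sketch, assuming hypothetical names (`SketchMongoDBEventStore`, `RecordedEvent`, `AfterCommitHook`); the real store would do a MongoDB write where this one appends in memory, and the actual Emmett API may differ:

```typescript
// Sketch only: an event store that notifies registered onAfterCommit hooks
// after events are durably appended. All names here are illustrative.
type RecordedEvent = { type: string; data: Record<string, unknown> };
type AfterCommitHook = (events: RecordedEvent[]) => void;

class SketchMongoDBEventStore {
  private hooks: AfterCommitHook[] = [];
  private streams = new Map<string, RecordedEvent[]>();

  onAfterCommit(hook: AfterCommitHook): void {
    this.hooks.push(hook);
  }

  appendToStream(streamName: string, events: RecordedEvent[]): void {
    // In the real store this would be a MongoDB write; here it's in-memory.
    const stream = this.streams.get(streamName) ?? [];
    this.streams.set(streamName, [...stream, ...events]);
    // Only notify hooks once the commit has succeeded.
    for (const hook of this.hooks) hook(events);
  }
}
```

The key property is that hooks fire only after a successful commit, which is what makes the in-memory bus usable on top of it.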
2. Define the MongoDBEventStoreConsumer and MongoDBSubscription
You can check the PostgreSQL consumer for inspiration (see here). Compared to it, the options should take a MongoDBEventStore for now. Thanks to that, the consumer can plug into the onAfterCommit hook. You also don't need to implement polling: the hook fires only for newly committed events, so you'll be listening just to upcoming ones.
So it'll be just a wrapper, but it'll allow us to plug async projections based on Oplog or change streams later on.
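As a wrapper, the consumer could be as small as this sketch. The names (`SketchMongoDBEventStoreConsumer`, `Subscription`) and the options shape are assumptions for illustration, not the final API:

```typescript
// Sketch only: a consumer that is just a wrapper over the store's
// onAfterCommit hook; no polling, only upcoming events.
type RecordedEvent = { type: string; data: Record<string, unknown> };

type EventStoreWithHook = {
  onAfterCommit(hook: (events: RecordedEvent[]) => void): void;
};

type Subscription = { handle: (events: RecordedEvent[]) => void };

class SketchMongoDBEventStoreConsumer {
  private subscriptions: Subscription[] = [];

  constructor(private options: { store: EventStoreWithHook }) {}

  subscribe(subscription: Subscription): void {
    this.subscriptions.push(subscription);
  }

  start(): void {
    // Plug into the hook; every committed batch is fanned out
    // to all registered subscriptions.
    this.options.store.onAfterCommit((events) => {
      for (const sub of this.subscriptions) sub.handle(events);
    });
  }
}
```

Swapping the hook-based source for an Oplog or change-stream source later would only change `start()`, not the subscription contract.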
3. Define async projections
Async projections could be plugged in as syntactic sugar on top of subscriptions. Subscriptions can have mongoClient in their context, which will be useful for passing it on to projections.
We'll probably have a similar set of projections as for PostgreSQL inline projections, with the most generic one taking just event(s) and context.
Then, multi-stream and single-stream variants, analogous to Pongo projections.
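The "syntactic sugar" idea can be sketched as a small adapter that closes over a context carrying `mongoClient` and turns the generic projection shape into a subscription. All type names here are hypothetical:

```typescript
// Sketch only: the most generic async projection takes event(s) plus a
// context with mongoClient; wrapping it into a subscription is trivial.
type RecordedEvent = { type: string; data: Record<string, unknown> };
type ProjectionContext = { mongoClient: unknown }; // real type: MongoClient

type AsyncProjection = {
  handle: (
    events: RecordedEvent[],
    context: ProjectionContext,
  ) => Promise<void> | void;
};

type Subscription = {
  handle: (events: RecordedEvent[]) => Promise<void> | void;
};

// Syntactic sugar: a projection becomes a subscription by closing
// over the context the subscription already carries.
const projectionToSubscription = (
  projection: AsyncProjection,
  context: ProjectionContext,
): Subscription => ({
  handle: (events) => projection.handle(events, context),
});
```

Single-stream and multi-stream variants would then be further sugar over this generic shape, partitioning events by stream before calling the handler.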
You can also check the Pongo handle method, which detects changes made to documents (see here) and uses Pongo to apply the matching operation (insert, update, or delete). It could be emulated for MongoDB in almost the same way.
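That change-detection pattern could look like the following sketch: run a function over the current document and translate the result into an insert, replace, or delete. The `SimpleCollection` interface is a simplified stand-in for the MongoDB driver's `findOne`/`insertOne`/`replaceOne`/`deleteOne`, not Pongo's actual signature:

```typescript
// Sketch only: Pongo-style handle emulated over plain collection operations.
type Doc = { _id: string } & Record<string, unknown>;

type SimpleCollection = {
  findOne(id: string): Doc | null;
  insertOne(doc: Doc): void;
  replaceOne(id: string, doc: Doc): void;
  deleteOne(id: string): void;
};

const handle = (
  collection: SimpleCollection,
  id: string,
  update: (current: Doc | null) => Doc | null,
): void => {
  const existing = collection.findOne(id);
  const result = update(existing);

  // Compare before/after to pick the operation.
  if (existing === null && result !== null) collection.insertOne(result);
  else if (existing !== null && result !== null) collection.replaceOne(id, result);
  else if (existing !== null && result === null) collection.deleteOne(id);
  // existing === null && result === null: nothing to do
};
```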
Notes
I suggest doing it like that, as my general idea for async handling is built on the following concepts:
- Consumer - responsible for getting messages from a source (e.g. the MongoDB or PostgreSQL event store, later on Kafka, SQS, etc.).
- Subscription - the actual logic for handling upcoming events from consumers and handing them off to the target.
Having that, we can decouple consuming messages from the actual logic.
Thanks to that, you could define subscriptions targeting, e.g., Mongo that are fed from different sources: not only the MongoDB event store, but also Kafka, SQS, etc.
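The decoupling above can be shown with one subscription plugged into two consumers standing in for different sources. `InMemoryConsumer` and its `emit` method are purely illustrative:

```typescript
// Sketch only: a Subscription depends solely on the message shape,
// so the same instance can be registered with consumers backed by
// different sources (MongoDB event store, Kafka, SQS, ...).
type Message = { type: string; data: Record<string, unknown> };

type Subscription = { handle: (messages: Message[]) => void };

interface Consumer {
  subscribe(subscription: Subscription): void;
}

class InMemoryConsumer implements Consumer {
  private subs: Subscription[] = [];

  subscribe(sub: Subscription): void {
    this.subs.push(sub);
  }

  // Simulates messages arriving from this consumer's source.
  emit(messages: Message[]): void {
    for (const s of this.subs) s.handle(messages);
  }
}
```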
TODO
- [ ] onAfterCommit hook (#170)
- [ ] MongoDBEventStoreConsumer and MongoDBSubscription