It should be possible to skip/abort indexing for a record during the `RecordIndexer._prepare_record` stage. There are two options for that, both involving behavior in the receivers of the `before_record_index` signal:
1. In the receiver, raise a standardized exception (e.g. `raise SkipIndexing()`), which would be handled appropriately in the `_prepare_record` method. This solution is backward compatible, but relies on exceptions for control flow (which are slow-ish). See the first sketch after this list.
2. In the receiver, return `None` or an empty dictionary (`{}`). The result of `._prepare_record` would then have to be checked for emptiness in the `.index` and `._index_action` methods. The problem with this solution is that current receiver implementations are probably not aware that the passed `json` dictionary parameter might end up being empty. See the second sketch after this list.
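A minimal sketch of option 1. `SkipIndexing` is hypothetical (it does not exist in invenio-indexer today), and the soft-delete check assumes the invenio-records convention that soft-deleting a record sets its `model.json` to `None`:

```python
from invenio_indexer.signals import before_record_index


class SkipIndexing(Exception):
    """Hypothetical exception a receiver raises to abort indexing."""


@before_record_index.connect
def skip_soft_deleted(sender, json=None, record=None, **kwargs):
    """Skip records that were soft-deleted after being queued."""
    if record is not None and record.model.json is None:
        raise SkipIndexing()
```

`_prepare_record` would then wrap its `before_record_index.send(...)` call in a `try`/`except SkipIndexing` block and signal the skip to its callers (e.g. by returning `None`).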
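A sketch of option 2 under the same soft-delete assumption. Since receivers mutate the `json` payload in place, "returning" an empty dictionary amounts to clearing it; the emptiness check on the calling side, shown as a comment, is hypothetical and not the shipped code:

```python
from invenio_indexer.signals import before_record_index


@before_record_index.connect
def clear_payload_for_deleted(sender, json=None, record=None, **kwargs):
    """Leave an empty payload so that the indexer skips this record."""
    if record is not None and record.model.json is None:
        json.clear()


# Hypothetical check in .index / ._index_action:
#
#     body = self._prepare_record(record, index, doc_type)
#     if not body:  # None or {} => a receiver asked to skip this record
#         return    # drop the action instead of emitting it
```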
The use case behind this: a record might, for example, be sent for bulk indexing but then get (soft-)deleted while it is still in the bulk indexing queue. When the bulk indexer consumer runs, it should skip this record.