Interface DocumentStore
-
public interface DocumentStore
Document storage operations.
-
Method Summary
long copyIndex(Query<?> query, String newIndex, int bulkBatchSize)
    Copy from a source index to a new index taking only the documents matching the given query.
long copyIndex(Class<?> beanType, String newIndex)
    Copy the index to a new index.
long copyIndex(Class<?> beanType, String newIndex, long sinceEpochMillis)
    Copy entries from an index to a new index but limiting to documents that have been modified since the sinceEpochMillis time.
void createIndex(String indexName, String alias)
    Create an index given a mapping file as a resource in the classPath (similar to DDL create table).
void dropIndex(String indexName)
    Drop the index from the document store (similar to DDL drop table).
<T> T find(DocQueryContext<T> request)
    Return the bean by fetching its content from the document store.
<T> void findEach(DocQueryContext<T> query, Consumer<T> consumer)
    Execute the query against the document store with the expectation of a large set of results that are processed in a scrolling resultSet fashion.
void findEach(String indexNameType, String rawQuery, Consumer<RawDoc> consumer)
    Find each, processing raw documents.
<T> void findEachWhile(DocQueryContext<T> query, Predicate<T> consumer)
    Execute the query against the document store with the expectation of a large set of results that are processed in a scrolling resultSet fashion.
void findEachWhile(String indexNameType, String rawQuery, Predicate<RawDoc> consumer)
    Find each, processing raw documents and stopping when the predicate returns false.
<T> List<T> findList(DocQueryContext<T> request)
    Execute the find list query.
<T> PagedList<T> findPagedList(DocQueryContext<T> request)
    Execute the query against the document store returning the paged list.
void indexAll(Class<?> beanType)
    Update the document store for all beans of this type.
<T> void indexByQuery(Query<T> query)
    Update the associated document store using the result of the query.
<T> void indexByQuery(Query<T> query, int bulkBatchSize)
    Update the associated document store index using the result of the query, additionally specifying a bulkBatchSize to use for sending the messages to ElasticSearch.
void indexSettings(String indexName, Map<String,Object> settings)
    Modify the settings on an index.
long process(List<DocStoreQueueEntry> queueEntries)
    Process the queue entries, sending updates to the document store or queuing them for later processing.
-
Method Detail
-
indexByQuery
<T> void indexByQuery(Query<T> query)
Update the associated document store using the result of the query. This will execute the query against the database, creating a document for each bean graph and sending it to the document store.
Note that the select and fetch paths of the query are set for you to match the document structure needed based on @DocStore and @DocStoreEmbedded, so this query only requires the predicates.
The query will be executed using findEach, so it is safe to use a query that fetches a lot of beans. The default bulkBatchSize is used.
- Parameters:
  query - The query that selects objects to send to the document store.
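For example, a minimal sketch (the Product bean and the status predicate are illustrative, not part of this API; documentStore is assumed to be database.docStore()):

  // only the predicates matter; select/fetch paths are derived from the @DocStore mapping
  Query<Product> query = database.find(Product.class)
    .where().eq("status", "ACTIVE")
    .query();

  documentStore.indexByQuery(query);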
-
indexByQuery
<T> void indexByQuery(Query<T> query, int bulkBatchSize)
Update the associated document store index using the result of the query, additionally specifying a bulkBatchSize to use for sending the messages to ElasticSearch.
- Parameters:
  query - The query that selects objects to send to the document store.
  bulkBatchSize - The batch size to use when bulk sending to the document store.
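For example, a sketch (the Product bean, the whenModified property and the since variable are illustrative assumptions):

  // re-index recently modified products
  Query<Product> query = database.find(Product.class)
    .where().ge("whenModified", new Timestamp(since))
    .query();

  // send updates to the document store in bulk batches of 1000
  documentStore.indexByQuery(query, 1000);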
-
indexAll
void indexAll(Class<?> beanType)
Update the document store for all beans of this type. This is the same as indexByQuery where the query has no predicates and so fetches all rows.
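For example (Product is an illustrative @DocStore bean):

  // re-index every Product row into the document store
  documentStore.indexAll(Product.class);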
-
find
@Nullable <T> T find(DocQueryContext<T> request)
Return the bean by fetching its content from the document store. If the document is not found, null is returned. Typically this is called indirectly by findOne() on the query.

  Customer customer = database.find(Customer.class)
    .setUseDocStore(true)
    .setId(42)
    .findOne();
-
findList
<T> List<T> findList(DocQueryContext<T> request)
Execute the find list query. This request is prepared to execute secondary queries. Typically this is called indirectly by findList() on the query that has setUseDocStore(true).

  List<Customer> newCustomers = database.find(Customer.class)
    .setUseDocStore(true)
    .where().eq("status", Customer.Status.NEW)
    .findList();
-
findPagedList
<T> PagedList<T> findPagedList(DocQueryContext<T> request)
Execute the query against the document store returning the paged list.
The query should have firstRow or maxRows set prior to calling this method. Typically this is called indirectly by findPagedList() on the query that has setUseDocStore(true).

  PagedList<Customer> newCustomers = database.find(Customer.class)
    .setUseDocStore(true)
    .where().eq("status", Customer.Status.NEW)
    .setMaxRows(50)
    .findPagedList();
-
findEach
<T> void findEach(DocQueryContext<T> query, Consumer<T> consumer)
Execute the query against the document store with the expectation of a large set of results that are processed in a scrolling resultSet fashion. For example, with the ElasticSearch doc store this uses SCROLL.
Typically this is called indirectly by findEach() on the query that has setUseDocStore(true).

  database.find(Order.class)
    .setUseDocStore(true)
    .where()... // perhaps add predicates
    .findEach((Order order) -> {
      // process the bean ...
    });
-
findEachWhile
<T> void findEachWhile(DocQueryContext<T> query, Predicate<T> consumer)
Execute the query against the document store with the expectation of a large set of results that are processed in a scrolling resultSet fashion. Unlike findEach() this provides the opportunity to stop iterating through the large query.
For example, with the ElasticSearch doc store this uses SCROLL.
Typically this is called indirectly by findEachWhile() on the query that has setUseDocStore(true).
  database.find(Order.class)
    .setUseDocStore(true)
    .where()... // perhaps add predicates
    .findEachWhile((Order order) -> {
      // process the bean, returning true to continue or false to stop
      ...
    });
-
findEach
void findEach(String indexNameType, String rawQuery, Consumer<RawDoc> consumer)
Find each, processing raw documents.
- Parameters:
  indexNameType - The full index name and type
  rawQuery - The query to execute
  consumer - Consumer to process each document
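For example, a sketch (the index name and the raw JSON query are illustrative and assume an ElasticSearch style doc store):

  String rawQuery = "{\"query\":{\"match_all\":{}}}";

  documentStore.findEach("product", rawQuery, rawDoc -> {
    // process each raw document
    System.out.println(rawDoc);
  });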
-
findEachWhile
void findEachWhile(String indexNameType, String rawQuery, Predicate<RawDoc> consumer)
Find each, processing raw documents and stopping when the predicate returns false.
- Parameters:
  indexNameType - The full index name and type
  rawQuery - The query to execute
  consumer - Consumer to process each document until false is returned
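For example, a sketch (the counter-based stop condition and rawQuery value are illustrative):

  String rawQuery = "{\"query\":{\"match_all\":{}}}";
  AtomicInteger count = new AtomicInteger();

  documentStore.findEachWhile("product", rawQuery, rawDoc -> {
    // process the raw document ...
    // stop scrolling after 100 documents
    return count.incrementAndGet() < 100;
  });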
-
process
long process(List<DocStoreQueueEntry> queueEntries) throws IOException
Process the queue entries, sending updates to the document store or queuing them for later processing.
- Throws:
  IOException
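For example, a minimal sketch; fetchPendingQueueEntries() is a hypothetical helper that loads pending entries from wherever the application queues them:

  List<DocStoreQueueEntry> entries = fetchPendingQueueEntries(); // hypothetical helper
  long processed = documentStore.process(entries);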
-
dropIndex
void dropIndex(String indexName)
Drop the index from the document store (similar to DDL drop table).

  DocumentStore documentStore = database.docStore();
  documentStore.dropIndex("product_copy");
-
createIndex
void createIndex(String indexName, String alias)
Create an index given a mapping file as a resource in the classPath (similar to DDL create table).

  DocumentStore documentStore = database.docStore();

  // uses the product_copy.mapping.json resource
  // ... to define mappings for the index
  documentStore.createIndex("product_copy", null);

- Parameters:
  indexName - the name of the new index
  alias - the alias of the index
-
indexSettings
void indexSettings(String indexName, Map<String,Object> settings)
Modify the settings on an index. For example, this can be used to set the ElasticSearch refresh_interval on an index before a bulk update.

  // refresh_interval -1 ... disable refresh while bulk loading
  Map<String,Object> settings = new LinkedHashMap<>();
  settings.put("refresh_interval", "-1");
  documentStore.indexSettings("product", settings);

  // refresh_interval 1s ... restore after bulk loading
  Map<String,Object> settings = new LinkedHashMap<>();
  settings.put("refresh_interval", "1s");
  documentStore.indexSettings("product", settings);

- Parameters:
  indexName - the name of the index to update settings on
  settings - the settings to set on the index
-
copyIndex
long copyIndex(Class<?> beanType, String newIndex)
Copy the index to a new index. This copy process does not use the database but instead will copy from the source index to a destination index.

  long copyCount = documentStore.copyIndex(Product.class, "product_copy");

- Parameters:
  beanType - The bean type of the source index
  newIndex - The name of the index to copy to
- Returns:
  the number of documents copied to the new index
-
copyIndex
long copyIndex(Class<?> beanType, String newIndex, long sinceEpochMillis)
Copy entries from an index to a new index but limiting to documents that have been modified since the sinceEpochMillis time. To support this the document needs to have a @WhenModified property.

  long copyCount = documentStore.copyIndex(Product.class, "product_copy", sinceMillis);

- Parameters:
  beanType - The bean type of the source index
  newIndex - The name of the index to copy to
  sinceEpochMillis - Only documents modified after this epoch millis time are copied
- Returns:
  the number of documents copied to the new index
-
copyIndex
long copyIndex(Query<?> query, String newIndex, int bulkBatchSize)
Copy from a source index to a new index taking only the documents matching the given query.

  // predicates to select the source documents to copy
  Query<Product> query = database.find(Product.class)
    .where()
      .ge("whenModified", new Timestamp(since))
      .ge("name", "A")
      .lt("name", "D")
    .query();

  // copy from the source index to the "product_copy" index
  long copyCount = documentStore.copyIndex(query, "product_copy", 1000);

- Parameters:
  query - The query to select the source documents to copy
  newIndex - The target index to copy the documents to
  bulkBatchSize - The ElasticSearch bulk batch size, if 0 uses the default
- Returns:
  The number of documents copied to the new index.
-