Apr 16, 2012

Did you know that bigdata has a built-in REST API for client-server access to the RDF database? We call this interface the “NanoSparqlServer”, and its API is outlined in detail on the wiki:

https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer

What’s new with the NSS is that we’ve recently added a Java API around it, so you can write client code without having to understand the HTTP API or make HTTP calls directly. This is why there is suddenly a new dependency on Apache HttpComponents in the codebase. The Java wrapper is called “RemoteRepository”. If you’re comfortable writing application code against the Sesame SAIL/Repository API, you should feel right at home with the RemoteRepository class. It is not exactly the same, but it is very similar.

The class itself is fairly self-explanatory, but if you prefer examples, there is a test case for every RemoteRepository API call in the class TestNanoSparqlClient. (That test case also conveniently demonstrates how to launch a NanoSparqlServer wrapping a bigdata journal using Jetty, which it does at the beginning of every test.)
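
To give a flavor of the client API, here is a minimal sketch of issuing a query through RemoteRepository. The service URL is illustrative and the single-argument constructor is an assumption; TestNanoSparqlClient remains the authoritative reference for setup.

import org.openrdf.query.BindingSet;
import org.openrdf.query.TupleQueryResult;

import com.bigdata.rdf.sail.webapp.client.RemoteRepository;

public class NSSClientExample {

    public static void main(final String[] args) throws Exception {

        // URL of a running NanoSparqlServer instance (illustrative --
        // see TestNanoSparqlClient for how to launch one under Jetty).
        final String serviceURL = "http://localhost:8080/sparql";

        // Constructor arguments may vary by release; consult the javadoc.
        final RemoteRepository repo = new RemoteRepository(serviceURL);

        // Prepare and evaluate a SPARQL query over HTTP.
        final TupleQueryResult result = repo.prepareTupleQuery(
                "SELECT * WHERE { ?s ?p ?o } LIMIT 10").evaluate();
        try {
            while (result.hasNext()) {
                final BindingSet bs = result.next();
                System.out.println(bs);
            }
        } finally {
            result.close();
        }
    }
}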

Apr 16, 2012

I put together a more useful example of how to write a custom SPARQL function with bigdata. It’s up on the wiki here:

https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=CustomFunction

The example details a common use case – filtering out solutions based on security credentials for a particular user. For example, if you wanted to return a list of documents visible to the user “John”, you could do it with a custom SPARQL function:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex: <http://www.example.com/>
SELECT ?doc
WHERE {
  ?doc rdf:type ex:Document .
  FILTER(ex:validate(?doc, ?user))
}
BINDINGS ?user {
  (ex:John)
}

The function is called by referencing its unique URI, in this case ex:validate. This URI must be registered with bigdata’s FunctionRegistry along with an appropriate factory and operator; the wiki details how to do that, and a condensed sketch of the registration step follows below. In the query above, the function is called with two arguments: the document to be validated and the user to validate against. The user in this simple example is a constant supplied via the BINDINGS clause. Always remember that bigdata custom functions are executed one solution at a time – they do not yet benefit from vectored execution, so they must not read from the indices on a per-call basis. When evaluation does require touching the indices, a custom service (distinct from a custom function) is the more appropriate choice. This is how we implement SPARQL 1.1 Federation.
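
For orientation, here is that registration step condensed, following the shape of the wiki example. SecurityFilter stands in for your own operator (an IValueExpression evaluated once per solution); the exact Factory signature should be checked against the wiki and the FunctionRegistry javadoc.

import java.util.Map;

import org.openrdf.model.URI;
import org.openrdf.model.impl.URIImpl;

import com.bigdata.bop.IValueExpression;
import com.bigdata.rdf.internal.IV;
import com.bigdata.rdf.sparql.ast.FunctionRegistry;
import com.bigdata.rdf.sparql.ast.ValueExpressionNode;
import com.bigdata.rdf.sparql.ast.eval.AST2BOpUtility;

public class CustomFunctionSetup {

    static final URI VALIDATE = new URIImpl("http://www.example.com/validate");

    public static void register() {

        FunctionRegistry.add(VALIDATE, new FunctionRegistry.Factory() {

            public IValueExpression<? extends IV> create(
                    final String lex,
                    final Map<String, Object> scalarValues,
                    final ValueExpressionNode... args) {

                // ex:validate takes exactly two arguments.
                FunctionRegistry.checkArgs(args,
                        ValueExpressionNode.class, ValueExpressionNode.class);

                // Turn the AST nodes into value expressions.
                final IValueExpression<? extends IV> doc =
                        AST2BOpUtility.toVE(lex, args[0]);
                final IValueExpression<? extends IV> user =
                        AST2BOpUtility.toVE(lex, args[1]);

                // Your operator: returns true iff the user may see the
                // document. Implementing it is the bulk of the work and
                // is covered on the wiki.
                return new SecurityFilter(doc, user, lex);
            }
        });
    }
}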

Apr 10, 2012

The Graph Data Management 2012 workshop was held last week in Washington, DC. The workshop brought together an interesting mixture of people from several different backgrounds: people focused on data mining and prediction, people focused on graph algorithms (iterative algorithms over materialized graphs), and several presenters on “graph databases” (three on RDF databases and one on HyperGraphDB). Many thanks to the workshop organizers for pulling together such an interesting event!

It is clear that the “graph database” space is currently handicapped by a lack of standards. SPARQL can certainly solve many of the problems there, but it lacks a standardized way of dealing with provenance (aka link attributes). We have efficient extensions for this, and it sounds like at least Virtuoso will be picking them up as well, so maybe we can drive standardization that way. SPARQL has support for property paths, but it lacks a means of expressing iterative refinement algorithms so that they can be executed efficiently within the database. SPARQL UPDATE commands can operate iteratively on data sets on the server without round-tripping large graphs to the client, but there is not yet a standardized way to specify control logic for such updates, and without extensions that clarify which graphs or solutions should be durable and which should be wired into main memory, it is difficult to use SPARQL UPDATE for iterative algorithms that assemble an annotated graph. Equally worrisome, it does not yet appear possible to create good benchmarks for graph databases, because the low-level APIs wipe out the tremendous advantage gained from vectored evaluation in a database.

We will be announcing new features over the coming weeks and months designed to address some of these issues. The first extends SPARQL 1.1 UPDATE to let you provision and manage solution sets. A preview of this SPARQL UPDATE extension is published on the bigdata wiki. The extension adds just a little bit of syntax, but a whole lot of power. It was originally envisioned to give people the ability to page through large result sets without re-evaluating complex joins – a use case illustrated on the wiki and sketched below. However, we see lots of opportunities beyond an application-aware SPARQL cache.
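
To make the paging use case concrete, here is a sketch using the standard Sesame connection API against a bigdata repository. The INSERT INTO %set and INCLUDE %set syntax is taken from the wiki preview; the solution set name and the data are illustrative.

import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

public class SolutionSetPaging {

    public static void demo(final RepositoryConnection conn) throws Exception {

        // Evaluate the expensive join once, caching its solutions in a
        // named solution set (syntax per the wiki preview).
        conn.prepareUpdate(QueryLanguage.SPARQL,
                "PREFIX ex: <http://www.example.com/>\n" +
                "INSERT INTO %cachedJoin\n" +
                "SELECT ?doc ?author\n" +
                "WHERE { ?doc a ex:Document . ?doc ex:author ?author }"
        ).execute();

        // Page through the cached solutions without re-running the join.
        final TupleQueryResult page = conn.prepareTupleQuery(
                QueryLanguage.SPARQL,
                "SELECT ?doc ?author\n" +
                "WHERE { INCLUDE %cachedJoin }\n" +
                "ORDER BY ?doc OFFSET 200 LIMIT 100").evaluate();
        try {
            while (page.hasNext()) {
                System.out.println(page.next());
            }
        } finally {
            page.close();
        }
    }
}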

Another feature coming later this year is a distributed client/server graph protocol. It is designed to address the tight coupling of applications with graph databases, provide a fast, scalable object-level cache for graph data, and offer both fast in-memory traversal on the client and efficient subgraph matching on the server. Clients will also be able to create “graph transactions” and post updates back to the server through a write-through cache fabric. We plan to offer multiple client language bindings, providing graph database access within the browser, in Java, etc. We are even looking at a GPU binding for pure computational speed. The language bindings will be generated from metadata describing the object models.

Apr 1, 2012

This is a major version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15-node cluster. Bigdata operates in both a single-machine mode (Journal) and a cluster mode (Federation). The Journal provides fast, scalable ACID indexed storage for very large data sets, up to 50 billion triples/quads. The Federation provides fast, scalable, shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates, and supports incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit; choose the Federation when the administrative and machine overhead of operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.
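
For the single-machine case, a Journal can be stood up embedded in your application in a few lines via the Sesame API. A minimal sketch, assuming the RWStore backend; the property names and file name should be checked against GettingStarted [2] and the Options javadoc.

import java.util.Properties;

import org.openrdf.repository.RepositoryConnection;

import com.bigdata.journal.BufferMode;
import com.bigdata.journal.Options;
import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class EmbeddedJournalExample {

    public static void main(final String[] args) throws Exception {

        final Properties props = new Properties();
        // Back the Journal with the disk-based RWStore.
        props.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString());
        props.setProperty(Options.FILE, "bigdata.jnl");

        final BigdataSail sail = new BigdataSail(props);
        final BigdataSailRepository repo = new BigdataSailRepository(sail);
        repo.initialize();

        final RepositoryConnection conn = repo.getConnection();
        try {
            // Load data and query via the standard Sesame Repository API.
        } finally {
            conn.close();
            repo.shutDown();
        }
    }
}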

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single-machine RDF database. For custom development and cluster installations, we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse, or you can build it using the Ant script. The cluster installer requires the Ant script.

You can download the WAR from:

http://sourceforge.net/projects/bigdata/

You can checkout this release from:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_2_0

New features:

– SPARQL 1.1 UPDATE
– SPARQL 1.1 Service Description
– SPARQL 1.1 Basic Federated Query
– New integration point for custom services (ServiceRegistry).
– Remote Java client for NanoSparqlServer
– Sesame 2.6.3
– Ganglia integration (cluster)
– Performance improvements (cluster)

Feature summary:

– Single machine data storage to ~50B triples/quads (RWStore);
– Clustered data storage is essentially unlimited;
– Simple embedded and/or webapp deployment (NanoSparqlServer);
– Triples, quads, or triples with provenance (SIDs);
– Fast RDFS+ inference and truth maintenance;
– Fast 100% native SPARQL 1.1 evaluation;
– Integrated “analytic” query package;
– 100% Java memory manager leverages the JVM native heap (no GC).

Road map [3]:

– SPARQL 1.1 property paths (last missing feature for SPARQL 1.1);
– Runtime Query Optimizer for Analytic Query mode;
– Simplified deployment, configuration, and administration for clusters; and
– High availability for the journal and the cluster.

Change log:

Note: Versions with (*) MAY require data migration. For details, see [9].

1.2.0 (*)

– http://sourceforge.net/apps/trac/bigdata/ticket/92 (Monitoring webapp)
– http://sourceforge.net/apps/trac/bigdata/ticket/267 (Support evaluation of 3rd party operators)
– http://sourceforge.net/apps/trac/bigdata/ticket/337 (Compact and efficient movement of binding sets between nodes.)
– http://sourceforge.net/apps/trac/bigdata/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak)
– http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)
– http://sourceforge.net/apps/trac/bigdata/ticket/438 (KeyBeforePartitionException on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/439 (Class loader problem)
– http://sourceforge.net/apps/trac/bigdata/ticket/441 (Ganglia integration)
– http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler)
– http://sourceforge.net/apps/trac/bigdata/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly)
– http://sourceforge.net/apps/trac/bigdata/ticket/446 (HTTP Repository broken with bigdata 1.1.0)
– http://sourceforge.net/apps/trac/bigdata/ticket/448 (SPARQL 1.1 UPDATE)
– http://sourceforge.net/apps/trac/bigdata/ticket/449 (SPARQL 1.1 Federation extension)
– http://sourceforge.net/apps/trac/bigdata/ticket/451 (Serialization error in SIDs mode on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/454 (Global Row Store Read on Cluster uses Tx)
– http://sourceforge.net/apps/trac/bigdata/ticket/456 (IExtension implementations do point lookups on lexicon)
– http://sourceforge.net/apps/trac/bigdata/ticket/457 (“No such index” on cluster under concurrent query workload)
– http://sourceforge.net/apps/trac/bigdata/ticket/458 (Java level deadlock in DS)
– http://sourceforge.net/apps/trac/bigdata/ticket/460 (Uncaught interrupt resolving RDF terms)
– http://sourceforge.net/apps/trac/bigdata/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension)
– http://sourceforge.net/apps/trac/bigdata/ticket/464 (Query statistics do not update correctly on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/465 (Too many GRS reads on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/469 (Sail does not flush assertion buffers before query)
– http://sourceforge.net/apps/trac/bigdata/ticket/472 (acceptTaskService pool size on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/475 (Optimize serialization for query messages on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree)
– http://sourceforge.net/apps/trac/bigdata/ticket/478 (Cluster does not map input solution(s) across shards)
– http://sourceforge.net/apps/trac/bigdata/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal)
– http://sourceforge.net/apps/trac/bigdata/ticket/481 (PhysicalAddressResolutionException against 1.0.6)
– http://sourceforge.net/apps/trac/bigdata/ticket/482 (RWStore reset() should be thread-safe for concurrent readers)
– http://sourceforge.net/apps/trac/bigdata/ticket/484 (Java API for NanoSparqlServer REST API)
– http://sourceforge.net/apps/trac/bigdata/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache)
– http://sourceforge.net/apps/trac/bigdata/ticket/492 (Empty chunk in ThickChunkMessage (cluster))
– http://sourceforge.net/apps/trac/bigdata/ticket/493 (Virtual Graphs)
– http://sourceforge.net/apps/trac/bigdata/ticket/496 (Sesame 2.6.3)
– http://sourceforge.net/apps/trac/bigdata/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE)
– http://sourceforge.net/apps/trac/bigdata/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.)
– http://sourceforge.net/apps/trac/bigdata/ticket/500 (SPARQL 1.1 Service Description)
– http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an empty solution set as input should produce an empty solution as output)
– http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1)
– http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY)
– http://sourceforge.net/apps/trac/bigdata/ticket/501 (SPARQL 1.1 BINDINGS are ignored)
– http://sourceforge.net/apps/trac/bigdata/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException where it should throw NoSuchElementException)
– http://sourceforge.net/apps/trac/bigdata/ticket/504 (UNION with Empty Group Pattern)
– http://sourceforge.net/apps/trac/bigdata/ticket/505 (Exception when using SPARQL sort & statement identifiers)
– http://sourceforge.net/apps/trac/bigdata/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x)
– http://sourceforge.net/apps/trac/bigdata/ticket/508 (LIMIT causes hash join utility to log errors)
– http://sourceforge.net/apps/trac/bigdata/ticket/513 (Expose the LexiconConfiguration to Function BOPs)
– http://sourceforge.net/apps/trac/bigdata/ticket/515 (Query with two “FILTER NOT EXISTS” expressions returns no results)
– http://sourceforge.net/apps/trac/bigdata/ticket/516 (REGEXBOp should cache the Pattern when it is a constant)
– http://sourceforge.net/apps/trac/bigdata/ticket/517 (Java 7 Compiler Compatibility)
– http://sourceforge.net/apps/trac/bigdata/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.)
– http://sourceforge.net/apps/trac/bigdata/ticket/520 (CONSTRUCT WHERE shortcut)
– http://sourceforge.net/apps/trac/bigdata/ticket/521 (Incremental materialization of Tuple and Graph query results)
– http://sourceforge.net/apps/trac/bigdata/ticket/525 (Modify the IChangeLog interface to support multiple agents)
– http://sourceforge.net/apps/trac/bigdata/ticket/527 (Expose timestamp of LexiconRelation to function bops)
– http://sourceforge.net/apps/trac/bigdata/ticket/532 (ClassCastException during hash join (can not be cast to TermId))
– http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs)
– http://sourceforge.net/apps/trac/bigdata/ticket/534 (BSBM BI Q5 error using MERGE JOIN)

1.1.0 (*)

– http://sourceforge.net/apps/trac/bigdata/ticket/23 (Lexicon joins)
– http://sourceforge.net/apps/trac/bigdata/ticket/109 (Store large literals as “blobs”)
– http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM “how to” in wiki and build.xml are out of date.)
– http://sourceforge.net/apps/trac/bigdata/ticket/203 (Implement a persistence-capable hash table to support analytic query)
– http://sourceforge.net/apps/trac/bigdata/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.)
– http://sourceforge.net/apps/trac/bigdata/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without)
– http://sourceforge.net/apps/trac/bigdata/ticket/232 (Bottom-up evaluation semantics).
– http://sourceforge.net/apps/trac/bigdata/ticket/246 (Derived xsd numeric data types must be inlined as extension types.)
– http://sourceforge.net/apps/trac/bigdata/ticket/254 (Revisit pruning of intermediate variable bindings during query execution)
– http://sourceforge.net/apps/trac/bigdata/ticket/261 (Lift conditions out of subqueries.)
– http://sourceforge.net/apps/trac/bigdata/ticket/300 (Native ORDER BY)
– http://sourceforge.net/apps/trac/bigdata/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes)
– http://sourceforge.net/apps/trac/bigdata/ticket/330 (NanoSparqlServer does not locate “html” resources when run from jar)
– http://sourceforge.net/apps/trac/bigdata/ticket/334 (Support inlining of unicode data in the statement indices.)
– http://sourceforge.net/apps/trac/bigdata/ticket/364 (Scalable default graph evaluation)
– http://sourceforge.net/apps/trac/bigdata/ticket/368 (Prune variable bindings during query evaluation)
– http://sourceforge.net/apps/trac/bigdata/ticket/370 (Direct translation of openrdf AST to bigdata AST)
– http://sourceforge.net/apps/trac/bigdata/ticket/373 (Fix StrBOp and other IValueExpressions)
– http://sourceforge.net/apps/trac/bigdata/ticket/377 (Optimize OPTIONALs with multiple statement patterns.)
– http://sourceforge.net/apps/trac/bigdata/ticket/380 (Native SPARQL evaluation on cluster)
– http://sourceforge.net/apps/trac/bigdata/ticket/387 (Cluster does not compute closure)
– http://sourceforge.net/apps/trac/bigdata/ticket/395 (HTree hash join performance)
– http://sourceforge.net/apps/trac/bigdata/ticket/401 (inline xsd:unsigned datatypes)
– http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data)
– http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.)
– http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster)

1.0.3

– http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
– http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
– http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
– http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)
– http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK))
– http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
– http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal)
– http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
– http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add “context-uri” request parameter to specify the default context for INSERT in the REST API)
– http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment)
– http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API)
– http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail)
– http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer)
– http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
– http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
– http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
– http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
– http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

1.0.2

– http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
– http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM “how to” in wiki and build.xml are out of date.)
– http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.)
– http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
– http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.)
– http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.)
– http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
– http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
– http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
– http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)

1.0.1 (*)

– http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
– http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
– http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
– http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
– http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
– http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
– http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
– http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
– http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
– http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports “FixedAllocator returning null address, with freeBits”.)
– http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
– http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j – slf4j bridge.)

For more information about bigdata(R), please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF. See [7] for more information about SYSTAP, LLC and bigdata.