
June 11 2015

08:51
And what about the workers?

June 10 2015

16:03

In Hoc Signo Vinces (part 21 of n): Running TPC-H on Virtuoso Elastic Cluster on Amazon EC2

We have made an Amazon EC2 deployment of Virtuoso 7 Commercial Edition, configured to use the Elastic Cluster Module with TPC-H preconfigured, similar to the recently published OpenLink Virtuoso Benchmark AMI running the Open Source Edition. The details of the new Elastic Cluster AMI and steps to use it will be published in a forthcoming post. Here we will simply look at results of running TPC-H 100G scale on two machines, and 1000G scale on four machines. This shows how Virtuoso provides great performance on a cloud platform. The extremely fast bulk load — 33 minutes for a terabyte! — means that you can get straight to work even with on-demand infrastructure.

In the following, the Amazon instance type is R3.8xlarge, each with dual Xeon E5-2670 v2, 244G RAM, and 2 x 300G SSD. The image is based on Amazon Linux with built-in network optimization. We first tried a RedHat image without network optimization and had considerable trouble with the interconnect. Using network-optimized Amazon Linux images inside a virtual private cloud has resolved all these problems.

The network-optimized 10GbE interconnect at Amazon offers throughput close to that of QDR InfiniBand running TCP/IP; thus the Amazon platform is suitable for running cluster databases. The execution that we have seen is not seriously network bound.

100G on 2 machines, with a total of 32 cores, 64 threads, 488 GB RAM, 4 x 300 GB SSD

Load time: 3m 52s

Run   Power       Throughput   Composite
1     523,554.3   590,692.6    556,111.2
2     565,353.3   642,503.0    602,694.9

1000G on 4 machines, with a total of 64 cores, 128 threads, 976 GB RAM, 8 x 300 GB SSD

Load time: 32m 47s

Run   Power       Throughput   Composite
1     592,013.9   754,107.6    668,163.3
2     896,564.1   828,265.4    861,738.4
3     883,736.9   829,609.0    856,245.3

For the larger scale we did 3 sets of power + throughput tests to measure consistency of performance. By the TPC-H rules, the worst (first) score should be reported. Even right after bulk load, the first power score is markedly lower than the next one due to working set effects. This is seen to a lesser degree with the first throughput score also.
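
For reference, the composite is the geometric mean of the power and throughput scores; checking against the first 100G run above:

    Composite = sqrt( Power × Throughput )
              = sqrt( 523,554.3 × 590,692.6 )
              ≈ 556,111    (reported: 556,111.2)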

The numerical summaries are available in a report.zip file, or individually --

Subsequent posts will explain how to deploy Virtuoso Elastic Clusters on AWS.

In Hoc Signo Vinces (TPC-H) Series

June 09 2015

15:51

Introducing the OpenLink Virtuoso Benchmarks AMI on Amazon EC2

The OpenLink Virtuoso Benchmarks AMI is an Amazon EC2 machine image with the latest Virtuoso open source technology preconfigured to run —

  • TPC-H, the classic of SQL data warehousing

  • LDBC SNB, the new Social Network Benchmark from the Linked Data Benchmark Council

  • LDBC SPB, the RDF/SPARQL Semantic Publishing Benchmark from LDBC

This package is ideal for technology evaluators and developers interested in getting the most performance out of Virtuoso. This is also an all-in-one solution to any questions about reproducing claimed benchmark results. All necessary tools for building and running are included; thus any developer can use this model installation as a starting point. The benchmark drivers are preconfigured with appropriate settings, and benchmark qualification tests can be run with a single command.

The Benchmarks AMI includes a precompiled, preconfigured checkout of the v7fasttrack github repository, checkouts of the github repositories of the benchmarks, and a number of running directories with all configuration files preset and optimized. The image is intended to be instantiated on an R3.8xlarge Amazon instance with 244G RAM, dual Xeon E5-2670 v2, and 600G SSD.

Benchmark datasets and preloaded database files can be downloaded from S3 when large, and generated as needed on the instance when small. As an alternative, the instance is also set up to do all phases of data generation and database bulk load.

The following benchmark setups are included:

  • TPC-H 100G
  • TPC-H 300G
  • LDBC SNB Validation
  • LDBC SNB Interactive 100G
  • LDBC SNB Interactive 300G (SF3)
  • LDBC SPB Validation
  • LDBC SPB Basic 256 Mtriples (SF5)
  • LDBC SPB Basic 1 Gtriple

The AMI will be expanded as new benchmarks are introduced, for example, the LDBC Social Network Business Intelligence and Graph Analytics workloads.

To get started:

  1. Instantiate machine image ami-5304ef38 (AMI ID is subject to change; you should be able to find the latest by searching for "OpenLink Virtuoso Benchmarks" in "Community AMIs") with an R3.8xlarge instance.

  2. Connect via ssh.

  3. See the README (also found in the ec2-user's home directory) for full instructions on getting up and running.

15:24

SNB Interactive, Part 3: Choke Points and Initial Run on Virtuoso

In this post we will look at running the LDBC SNB on Virtuoso.

First, let's recap what the benchmark is about:

  1. fairly frequent short updates, with no update contention worth mentioning
  2. short random lookups
  3. medium complex queries centered around a person's social environment

The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an online social application does lookups for the most part. The medium complex queries are to challenge the DBMS.

The DBMS challenges have to do firstly with query optimization, and secondly with execution with a lot of non-local random access patterns. Query optimization is not a requirement, per se, since imperative implementations are allowed, but we will see that these are no more free of the laws of nature than the declarative ones.

The workload is arbitrarily parallel, so intra-query parallelization is not particularly useful, though not harmful either. There are latency constraints on operations which strongly encourage implementations to stay within a predictable time envelope regardless of specific query parameters. The parameters are a combination of person and date range, and sometimes tags or countries. The hardest queries have the potential to access all content created by people within 2 steps of a central person, so possibly thousands of people, times 2000 posts per person, times up to 4 tags per post. We are talking in the millions of key lookups, aiming for sub-second single-threaded execution.
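
As a purely illustrative back-of-the-envelope count, taking 2,000 people as a stand-in for "thousands":

    2,000 people within 2 steps
    × 2,000 posts per person   =  4,000,000 posts
    × up to 4 tags per post    = up to 16,000,000 tag lookups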

The test system is the same as used in the TPC-H series: dual Xeon E5-2630, 2x6 cores x 2 threads, 2.3GHz, 192 GB RAM. The software is the feature/analytics branch of v7fasttrack, available from www.github.com.

The dataset is the SNB 300G set, with:

  • 1,136,127 persons
  • 125,249,604 knows edges
  • 847,886,644 posts, including replies
  • 1,145,893,841 tags of posts or replies
  • 1,140,226,235 likes of posts or replies

As an initial step, we run the benchmark as fast as it will go. We use 32 threads on the driver side for 24 hardware threads.

Below are the numerical quantities for a 400K operation run after 150K operations worth of warmup.

Duration:   10:41.251
Throughput: 623.71 op/s

The statistics that matter are detailed below, with operations ranked in order of descending client-side wait-time. All times are in milliseconds.

% of total  total_wait  name                            count    mean       min     max
20%         4,231,130   LdbcQuery5                        656    6,449.89     245  10,311
11%         2,272,954   LdbcQuery8                     18,354      123.84      14   2,240
10%         2,200,718   LdbcQuery3                        388    5,671.95     468  17,368
7.3%        1,561,382   LdbcQuery14                     1,124    1,389.13       4   5,724
6.7%        1,441,575   LdbcQuery12                     1,252    1,151.42      15   3,273
6.5%        1,396,932   LdbcQuery10                     1,252    1,115.76      13   4,743
5%          1,064,457   LdbcShortQuery3PersonFriends   46,285       22.9979     0   2,287
4.9%        1,047,536   LdbcShortQuery2PersonPosts     46,285       22.6323     0   2,156
4.1%          885,102   LdbcQuery6                      1,721      514.295      8   5,227
3.3%          707,901   LdbcQuery1                      2,117      334.389     28   3,467
2.4%          521,738   LdbcQuery4                      1,530      341.005     49   2,774
2.1%          440,197   LdbcShortQuery4MessageContent  46,302        9.50708    0   2,015
1.9%          407,450   LdbcUpdate5AddForumMembership  14,338       28.4175     0   2,008
1.9%          405,243   LdbcShortQuery7MessageReplies  46,302        8.75217    0   2,112
1.9%          404,002   LdbcShortQuery6MessageForum    46,302        8.72537    0   1,968
1.8%          387,044   LdbcUpdate3AddCommentLike      12,659       30.5746     0   2,060
1.7%          361,290   LdbcShortQuery1PersonProfile   46,285        7.80577    0   2,015
1.6%          334,409   LdbcShortQuery5MessageCreator  46,302        7.22234    0   2,055
1%            220,740   LdbcQuery2                      1,488      148.347      2   2,504
0.96%         205,910   LdbcQuery7                      1,721      119.646     11   2,295
0.93%         198,971   LdbcUpdate2AddPostLike          5,974       33.3062     0   1,987
0.88%         189,871   LdbcQuery11                     2,294       82.7685     4   2,219
0.85%         182,964   LdbcQuery13                     2,898       63.1346     1   2,201
0.74%         158,188   LdbcQuery9                         78    2,028.05   1,108   4,183
0.67%         143,457   LdbcUpdate7AddComment           3,986       35.9902     1   1,912
0.26%          54,947   LdbcUpdate8AddFriendship          571       96.2294     1     988
0.2%           43,451   LdbcUpdate6AddPost              1,386       31.3499     1   2,060
0.0086%         1,848   LdbcUpdate4AddForum               103       17.9417     1      65
0.0002%            44   LdbcUpdate1AddPerson                2       22         10      34

At this point we have in-depth knowledge of the choke points the benchmark stresses, and we can give a first assessment of whether the design meets its objectives for setting an agenda for the coming years of graph database development.

The implementation is well optimized in general but still has maybe 30% room for improvement. We note that this is based on a compressed column store. One could think that alternative data representations, like in-memory graphs of structs and pointers between them, are better for the task. This is not necessarily so; at the least, a compressed column store is much more space efficient. Space efficiency is the root of cost efficiency, since as soon as the working set is not in memory, a random access workload is badly hit.

The set of choke points (technical challenges) revealed by the benchmark so far is as follows:

  • Cardinality estimation under heavy data skew — Many queries take a tag or a country as a parameter. The cardinalities associated with tags vary from 29M posts for the most common to 1 for the least common. Q6 has a common tag (in the top few hundred) half the time and a random, most often very infrequent, one the rest of the time. A declarative implementation must recognize the cardinality implications from the literal and plan accordingly. An imperative one would have to count. Missing this makes Q6 take about 40% of the total time instead of the 4.1% seen when the plan adapts.

  • Covering indices — Being able to make multi-column indices that duplicate some columns from the table often saves an entire table lookup. For example, an index on post by author can also contain the post's creation date (see the SQL sketch after this list).

  • Multi-hop graph traversal — Most queries access a two-hop environment starting at a person. Two queries look for shortest paths of unbounded length. For the two-hop case, it makes almost no difference whether this is done as a union or a special graph traversal operator. For shortest paths, this simply must be built into the engine; doing this client-side incurs prohibitive overheads. A bidirectional shortest path operation is a requirement for the benchmark.

  • Top K — Most queries returning posts order results by descending date. Once there are at least k results, anything older than the kth can be dropped, adding a date selection as early as possible in the query. This interacts with vectored execution, so that starting with a short vector size more rapidly produces an initial top k.

  • Late projection — Many queries access several columns and touch millions of rows but only return a few. The columns that are not used in sorting or selection can be retrieved only for the rows that are actually returned. This is especially useful with a column store, as this removes many large columns (e.g., text of a post) from the working set.

  • Materialization — Q14 accesses an expensive-to-compute edge weight, the number of post-reply pairs between two people. Keeping this precomputed drops Q14 from the top place. Other materialization would be possible, for example Q2 (top 20 posts by friends), but since Q2 is just 1% of the load, there is no need. One could of course argue that this should be 20x more frequent, in which case there could be a point to this.

  • Concurrency control — Read-write contention is rare, as updates are randomly spread over the database. However, some pages get read very frequently, e.g., some middle level index pages in the post table. Keeping a count of reading threads requires a mutex, and there is significant contention on this. Since the hot set can be one page, adding more mutexes does not always help. However, hash partitioning the index into many independent trees (as in the case of a cluster) helps for this. There is also contention on a mutex for assigning threads to client requests, as there are large numbers of short operations.
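
To make the covering index, top-k, and late projection points concrete, here is a minimal SQL sketch of the "newest posts by people within two steps" pattern. The table and column names (knows, post, ps_creatorid, ps_creationdate, ps_postid, ps_content) are illustrative, not the actual Virtuoso schema, and LIMIT stands in for whatever top-k syntax the engine uses.

    -- Assumes a covering index on post (ps_creatorid, ps_creationdate, ps_postid),
    -- so the inner top-k never touches the base table.
    SELECT p.ps_postid, p.ps_creationdate, p.ps_content
      FROM (
             -- two-hop environment of :root_person as a plain self-join;
             -- an engine may instead use a dedicated traversal operator
             SELECT ps_postid, ps_creationdate
               FROM post
              WHERE ps_creatorid IN (
                      SELECT DISTINCT k2.k_person2id
                        FROM knows k1
                        JOIN knows k2 ON k2.k_person1id = k1.k_person2id
                       WHERE k1.k_person1id = :root_person )
              ORDER BY ps_creationdate DESC   -- top-k: anything older than the
              LIMIT 20                        -- 20th result can be dropped early
           ) recent
      -- late projection: the wide ps_content column is fetched only
      -- for the 20 surviving rows
      JOIN post p ON p.ps_postid = recent.ps_postid;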

In subsequent posts, we will look at specific queries, what they in fact do, and what their theoretical performance limits would be. In this way we will have a precise understanding of which way SNB can steer the graph DB community.

SNB Interactive Series

June 08 2015

00:12

Some introductory presentations for CKAN

Reposted from the CKAN Association LinkedIn group. Feel free to join if you use LinkedIn.

Thanks to Augusto Herrmann Batista and OK Brazil for allowing the following repost:

I recently presented a couple of “lightning courses” to introduce an audience to CKAN.

One was at the Linked Open Data Brasil conference in Florianópolis, Brazil, in November 2014. It’s in Portuguese.

http://www.slideshare.net/AugustoHerrmannBatis/minicurso-de-ckan

The other one was presented at the IV Moscow Urban Forum, in Russia, in December 2014. This one is in English.

http://www.slideshare.net/AugustoHerrmannBatis/ckan-overview

Feel free to share and reuse, as they are CC-BY.

June 03 2015

16:51

The Virtuoso Science Library

There is a lot of scientific material on Virtuoso, but it has not been presented all together in any one place. So here is a compilation of the best resources, with a paragraph of introduction on each. Some of these are project deliverables from projects under the EU FP7 programme; some are peer-reviewed publications.

For the future, an updated version of this list may be found on the main Virtuoso site.

European Project Deliverables

  • GeoKnow D 2.6.1: Graph Analytics in the DBMS (2015-01-05)

    This introduces the idea of unbundling basic cluster DBMS functionality like cross partition joins and partitioned group by to form a graph processing framework collocated with the data.

  • GeoKnow D2.4.1: Geospatial Clustering and Characteristic Sets (2015-01-06)

    This presents experimental results of structure-aware RDF applied to geospatial data. The regularly structured part of the data goes in tables; the rest is triples/quads. Furthermore, for the first time in the RDF space, physical storage location is correlated to properties of entities, in this case geo location, so that geospatially adjacent items are also likely adjacent in the physical data representation.

  • LOD2 D2.1.5: 500 billion triple BSBM (2014-08-18)

    This presents experimental results on lookup and BI workloads on Virtuoso cluster with 12 nodes, for a total of 3T RAM and 192 cores. This also discusses bulk load, at up to 6M triples/s and specifics of query optimization in scale-out settings.

  • LOD2 D2.6: Parallel Programming in SQL (2012-08-12)

    This discusses ways of making SQL procedures partitioning-aware, so that one can, map-reduce style, send parallel chunks of computation to each partition of the data.

Publications

2015

  • Pham, M.-D., Passing, L., Erling, O., and Boncz, P.A. "Deriving an Emergent Relational Schema from RDF Data," WWW, 2015.

    This paper shows how RDF is in fact structured and how this structure can be reconstructed. This reconstruction then serves to create a physical schema, reintroducing all the benefits of physical design to the schema-last world. Experiments with Virtuoso show marked gains in query speed and data compactness.

2014

2013

2012

  • Orri Erling: Virtuoso, a Hybrid RDBMS/Graph Column Store. IEEE Data Eng. Bull. (DEBU) 35(1):3-8 (2012)

    This paper introduces the Virtuoso column store architecture and design choices. A single design serves both random updates and lookups as well as the big scans where column stores traditionally excel. Examples are given from both TPC-H and the schema-less RDF world.

  • Minh-Duc Pham, Peter A. Boncz, Orri Erling: S3G2: A Scalable Structure-Correlated Social Graph Generator. TPCTC 2012:156-172

    This paper presents the basis of the social network benchmarking technology later used in the LDBC benchmarks.

2011

2009

  • Orri Erling, Ivan Mikhailov: Faceted Views over Large-Scale Linked Data. LDOW 2009

    This paper introduces anytime query answering as an enabling technology for open-ended querying of large data on public service end points. While not every query can be run to completion, partial results can most often be returned within a constrained time window.

  • Orri Erling, Ivan Mikhailov: Virtuoso: RDF Support in a Native RDBMS. Semantic Web Information Management 2009:501-519

    This is a general presentation of how a SQL engine needs to be adapted to serve a run-time typed and schema-less workload.

2008

2007

  • Orri Erling, Ivan Mikhailov: RDF Support in the Virtuoso DBMS. CSSW 2007:59-68

    This is an initial discussion of RDF support in Virtuoso. Most specifics are by now different but this can give a historical perspective.

May 14 2015

15:37

SNB Interactive, Part 2 - Modeling Choices

SNB Interactive is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.

In the case of Virtuoso, we have played with SQL and SPARQL implementations. For a fixed schema and well known workload, SQL will always win. The reason is that SQL allows materialization of multi-part indices and data orderings that make sense for the application. In other words, there is transparency into physical design. An RDF/SPARQL-based application may also have physical design by means of structure-aware storage, but this is more complex and here we are just concerned with speed and having things work precisely as we intend.

Schema Design

SNB has a regular schema described by a UML diagram. This has a number of relationships, of which some have attributes. There are no heterogeneous sets, i.e., no need for run-time typed attributes or graph edges with the same label but heterogeneous end-points. Translation into SQL or SPARQL is straightforward. Edges with attributes (e.g., the foaf:knows relation between people) would end up represented as a subject with the end points and the effective date as properties. The relational implementation has a two-part primary key and the effective date as a dependent column. A native property graph database would use an edge with an extra property for this, since such edges are typically supported.
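
As a minimal sketch of that relational representation (column names are illustrative, not the exact Virtuoso schema):

    -- knows edge with an attribute: two-part primary key plus the
    -- effective date as a dependent column
    CREATE TABLE knows (
      k_person1id    BIGINT NOT NULL,
      k_person2id    BIGINT NOT NULL,
      k_creationdate BIGINT NOT NULL,   -- effective date of the relationship
      PRIMARY KEY (k_person1id, k_person2id)
    );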

The only table-level choice has to do with whether posts and comments are kept in the same or different data structures. The Virtuoso schema uses a single table for both, with nullable columns for the properties that occur only in one. This makes the queries more concise. There are cases where only non-reply posts of a given author are accessed. This is supported by having two author foreign key columns each with its own index. There is a single nullable foreign key from the reply to the post/comment being replied to.
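
A minimal sketch of that single-table layout, with illustrative names; the nullable columns cover the properties that occur only in posts or only in replies:

    CREATE TABLE post (
      ps_postid       BIGINT PRIMARY KEY,
      ps_creatorid    BIGINT,           -- author of a non-reply post, else NULL
      ps_c_creatorid  BIGINT,           -- author of a reply (comment), else NULL
      ps_replyof      BIGINT,           -- post/comment being replied to, NULL for non-replies
      ps_creationdate BIGINT NOT NULL,
      ps_content      VARCHAR           -- further post-only / reply-only columns omitted
      -- each author column gets its own index; see the composite index below
    );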

The workload has some frequent access paths that need to be supported by index. Some queries reward placing extra columns in indices. For example, a common pattern is accessing the most recent posts of an author or a group of authors. There, having a composite key of ps_creatorid, ps_creationdate, ps_postid pays off since the top-k on creationdate can be pushed down into the index without needing a reference to the table.
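
A sketch of such an index (illustrative name and syntax):

    -- author, then creation date, then post id: the newest-first top-k per
    -- author is answered from the index alone, with no table access
    CREATE INDEX post_creator_date ON post (ps_creatorid, ps_creationdate, ps_postid);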

The implementation is free to choose data types for attributes, particularly datetimes. The Virtuoso implementation adopts the practice of the Sparksee and Neo4j implementations and represents these as a count of milliseconds since the epoch. This is less confusing, faster to compare, and more compact than a native datetime datatype that may or may not have timezones, etc. Using a built-in datetime seems to be nearly always a bad idea. A dimension table or a number for a time dimension avoids the ambiguities of a calendar or at least makes these explicit.

The benchmark allows procedurally maintained materializations of intermediate results for use by queries as long as these are maintained transaction-by-transaction. For example, each person could have the 20 newest posts by their immediate contacts precomputed. This would reduce Q2 "top of the wall" to a single lookup. This does not however appear to be worthwhile. The Virtuoso implementation does do one such materialization for Q14: A connection weight is calculated for every pair of persons that know each other. This is related to the count of replies by either to content generated by the other. If there does not exist a single reply in either direction, the weight is taken to be 0. This weight is precomputed after bulk load and subsequently maintained each time a reply is added. The table for this is the only row-wise structure in the schema and represents a half-matrix of connected people, i.e., person1, person2 -> weight. Person1 is by convention the one with the smaller p_personid. Note that comparing IDs in this way is useful but not normally supported by SPARQL/RDF systems. SPARQL would end up comparing strings of URIs with disastrous performance implications unless an implementation-specific trick were used.
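
A minimal sketch of that materialized weight table (names are illustrative):

    -- row-wise half-matrix of connection weights, filled after bulk load and
    -- maintained whenever a reply is added; person1 has the smaller p_personid
    CREATE TABLE person_conn_weight (
      w_person1id BIGINT NOT NULL,
      w_person2id BIGINT NOT NULL,
      w_weight    INT    NOT NULL,   -- derived from reply counts between the pair
      PRIMARY KEY (w_person1id, w_person2id)
    );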

In the next installment, we will analyze an actual run.

15:37

SNB Interactive, Part 1 - What is SNB Interactive Really About?

This is the first in a series of blog posts analyzing the Interactive workload of the LDBC Social Network Benchmark. This is written from the dual perspective of participating in the benchmark design, and of building the OpenLink Virtuoso implementation of same.

With two implementations of SNB Interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its performance characteristics are understood; even if these do not represent the maximum of the attainable, there are no glaring mistakes; and the implementation represents a reasonable best effort by those who ought to know such, namely the system vendors.

The essence of a benchmark is a set of trick questions or "choke points," as LDBC calls them. A number of these were planned from the start. It is then the role of experience to tell whether addressing these is really the key to winning the race. Unforeseen ones will also surface.

So far, we see that SNB confronts the implementor with choices in the following areas:

  • Data model — Tabular relational (commonly known as SQL), graph relational (including RDF), property graph, etc.

  • Physical storage model — Row-wise vs. column-wise, for instance.

  • Ordering of materialized data — Sorted projections, composite keys, replicating columns in auxiliary data structures, etc.

  • Persistence of intermediate results —  Materialized views, triggers, precomputed temporary tables, etc.

  • Query optimization — join order/type, interesting physical data orderings, late projection, top k, etc.

  • Parameters vs. literals — Sometimes different parameter values result in different optimal query plans.

  • Predictable, uniform latency — Measurement rules stipulate that the SUT (system under test) must not fall behind the simulated workload.

  • Durability — How to make data durable while maintaining steady throughput, e.g., logging, checkpointing, etc.

In the process of making a benchmark implementation, one naturally encounters questions about the validity, reasonability, and rationale of the benchmark definition itself. Additionally, even though the benchmark might not directly measure certain aspects of a system, making an implementation will take a system past its usual envelope and highlight some operational aspects.

  • Data generation — Generating a mid-size dataset takes time, e.g., 8 hours for 300G. In a cloud situation, keeping the dataset in S3 or similar is necessary; re-generating every time is not an option.

  • Query mix — Are the relative frequencies of the operations reasonable? What bias does this introduce?

  • Uniformity of parameters — Due to non-uniform data distributions in the dataset, there is easily a 100x difference between "fast" and "slow" cases of a single query template. How long does one need to run to balance these fluctuations?

  • Working set — Experience shows that there is a large difference between almost-warm and steady-state of working set. This can be a factor of 1.5 in throughput.

  • Reasonability of latency constraints — In the present case, a qualifying run must have no more than 5% of all query executions starting over 1 second late. Each execution is scheduled beforehand and done at the intended time. If the SUT does not keep up, it will have all available threads busy and must finish some work before accepting new work, so some queries will start late. Is this a good criterion for measuring consistency of response time? There are some obvious possibilities for abuse.

  • Ease of benchmark implementation/execution — Perfection is open-ended and optimization possibilities infinite, albeit with diminishing returns. Still, getting started should not be too hard. Since systems will be highly diverse, testing that these in fact do the same thing is important. The SNB validation suite is good for this and, given publicly available reference implementations, the effort of getting started is not unreasonable.

  • Ease of adjustment — Since a qualifying run must meet latency constraints while going as fast as possible, setting the performance target involves trial and error. Does the tooling make this easy?

  • Reasonability of durability rule — Right now, one is not required to do checkpoints but must report the time to roll forward from the last checkpoint or initial state. Inspiring vendors to build faster recovery is certainly good, but we are not through with all the implications. What about redundant clusters?

The following posts will look at the above in light of actual experience.

May 05 2015

15:13

Thoughts on KOS (Part 3): Trends in knowledge organization

The accelerating pace of change in the economic, legal and social environment combined with tendencies towards increased decentralization of organizational structures have had a profound impact on the way we organize and utilize knowledge. The internet as we know it today and especially the World Wide Web as the multimodal interface for the presentation and consumption of multimedia information are the most prominent examples of these developments. To illustrate the impact of new communication technologies on information practices, Saumure & Shiri (2008) conducted a survey on knowledge organization trends in the Library and Information Sciences before and after the emergence of the World Wide Web. Table 1 shows their results.

[Table 1: Knowledge organization trends in Library and Information Science before and after the emergence of the Web (Saumure & Shiri 2008)]

The survey illustrates three major trends: 1) the spectrum of research areas has broadened significantly, from originally complex and expert-driven methodologies and systems to more light-weight, application-oriented approaches; 2) while certain research areas have kept their status over the years (e.g. Cataloguing & Classification or Machine Assisted Knowledge Organization), new areas of research have gained importance (e.g. Metadata Applications & Uses, Classifying Web Information, Interoperability Issues) while formerly prevalent topics like Cognitive Models or Indexing have declined in importance or dissolved into other areas; and 3) the quantity of papers that explicitly and implicitly deal with metadata issues has significantly increased.

These insights coincide with a survey conducted by The Economist (2010) that comes to the conclusion that metadata has become a key enabler in the creation of controllable and exploitable information ecosystems under highly networked circumstances. Metadata provides information about data, objects and concepts. This information can be descriptive, structural or administrative. Metadata adds value to data sets by providing structure (e.g. schemas) and increasing the expressivity (e.g. controlled vocabularies) of a dataset.

According to Weibel & Lagoze (1997, p. 177):

“[the] association of standardized descriptive metadata with networked objects has the potential for substantially improving resource discovery capabilities by enabling field-based (e.g., author, title) searches, permitting indexing of non-textual objects, and allowing access to the surrogate content that is distinct from access to the content of the resource itself.”

These trends influence the functional requirements of the next generation’s Knowledge Organization Systems (KOSs) as a support infrastructure for knowledge sharing and knowledge creation under conditions of distributed intelligence and competence.

Go to previous posts in this series:
Thoughts on KOS (Part 1): Getting to grips with “semantic” interoperability or
Thoughts on KOS (Part 2): Classifying Knowledge Organisation Systems

 

References

Saumure, Kristie; Shiri, Ali (2008). Knowledge organization trends in library and information studies: a preliminary comparison of pre- and post-web eras. In: Journal of Information Science, 34/5, 2008, pp. 651–666

The Economist (2010). Data, data everywhere. A special report on managing information. http://www.emc.com/collateral/analyst-reports/ar-the-economist-data-data-everywhere.pdf, accessed 2013-03-10

Weibel, S. L., & Lagoze, C. (1997). An element set to support resource discovery. In: International Journal on Digital Libraries, 1/2, pp. 176-187

15:07

PoolParty 5.1 comes with integrated Graph Search feature

SWC has launched PoolParty Semantic Suite Version 5.1, its taxonomy management and knowledge graph management software platform.

Version 5.1 offers several new features, including an ontology publishing module and an integrated graph-based search application, which shows instantly how changes to the taxonomy will influence search results.

New features of PoolParty 5.1 include:

  • Several updates of 3rd party components used by the PoolParty server, e.g. update of Sesame to version 2.7.14 to gain full SPARQL 1.1 compatibility and provide additional RDF serialization formats (N-Quads, RDF/JSON)
  • GraphSearch per project based on the calculated corpora in corpus management. After successful calculation, a GraphSearch interface is available via a persistent URL, e.g. http://vocabulary.semantic-web.at/PoolParty/graphsearch/cocktails, which is the GraphSearch over a knowledge graph about Cocktails.
  • Unified URI management to support URI creation aligned for projects and custom schemes
  • Enterprise Security: Several measures have been undertaken to provide the highest enterprise security possible
  • Publishing of custom schemes: Similar to the linked data frontend for projects, a schema publishing functionality for custom schemes has been added. A human-readable version of the scheme is displayed by default when accessing the schema URL in a browser. As an example, take a look at the Cocktail ontology.

[Screenshot: PoolParty custom schema publishing]

Find the detailed Release Notes in our online documentation.

 

 

April 28 2015

09:36

Our semantic event recommendations


Just a couple of years ago critics argued that the semantic approach in IT wouldn’t make the transformation from an inspiring academic discipline to a relevant business application. They were wrong! With the digitalization of business, the power of semantic solutions to handle Big Data became obvious.

Thanks to a dedicated global community of semantic technology experts, we can observe a rapid development of software solutions in this field. The progress is coupled to a fast growing number of corporations that are implementing semantic solutions to win insights from existing but unused data.
Knowledge transfer is extremely important in semantics. Let’s have a look at the community calendar for the upcoming months. We are looking forward to sharing our experiences and learning. Join us!

Check out the semantic technology event calendar

April 27 2015

13:51

SWC’s Semantic Event Recommendations

Just a couple of years ago critics argued that the semantic approach in IT wouldn’t make the transformation from an inspiring academic discipline to a relevant business application. They were wrong! With the digitalization of business, the power of semantic solutions to handle Big Data became obvious. Thanks to a dedicated global community of semantic technology experts, we can observe a rapid development of software solutions in this field. The progress is coupled to a fast growing number of corporations that are implementing semantic solutions to win insights from existing but unused data.

Knowledge transfer is extremely important in semantics. Let’s have a look at the community calendar for the upcoming months. We are looking forward to sharing our experiences and learning. Join us!

>> Semantic technology event calendar

 

13:03

Bernhard Haslhofer is the new Chief Data Scientist at SWC

Bernhard Haslhofer on his motivation to work as an advisor for Semantic Web Company

Being a researcher by training, it is my job to know the state of the art and to make significant and original contributions in my research field. Understanding and keeping at least in pace with technological developments is certainly challenging but also a major motivation for this job.

In the field of computer science it is common practice to validate and/or demonstrate novel techniques by writing papers and implementing software prototypes. Even though many of those prototypes offer innovative and novel features, they often remain hidden within the scientific community because they lack long-term support, market knowledge, or business skills. Turning research-driven innovation into products therefore requires innovative enterprises that can offer those complementary skills and are open to novel technological approaches.

I strongly believe that a tight cooperation between people from academia and industry brings mutual benefits for both sides: research-driven innovation for enterprises, and a valuable real-world feedback loop for academia.

In recent years, people at SWC have already demonstrated awareness and a high level of openness to novel ideas and developments in academia (e.g., Linked Data) and, above all, showed how those ideas can successfully be transformed into products and business. In my new role as Chief Data Scientist at SWC I am looking forward to further support research-driven innovation by questioning the status quo and identifying concrete steps to improve product features, with the overall goal of getting better in what we do.

 

Short Bio Bernhard Haslhofer

Dr. Bernhard Haslhofer is working as a Data Scientist at the Austrian Institute of Technology. His research interest lies in gaining insights from large-scale and connected datasets by applying machine learning, information retrieval, and network analytics techniques. Previously, Bernhard worked as a postdoctoral researcher and lecturer at Cornell University Information Science, and received a Ph.D. in Computer Science from the University of Vienna. He has numerous Linked Data related publications, serves on several related program committees, and is a recipient of an EU Marie Curie Fellowship and several research awards.

April 21 2015

15:06

Thoughts on KOS (Part 2): Classifying Knowledge Organisation Systems

Traditional KOSs include a broad range of system types from term lists to classification systems and thesauri. These organization systems vary in functional purpose and semantic expressivity. Most of these traditional KOSs were developed in a print and library environment. They have been used to control the vocabulary used when indexing and searching a specific product, such as a bibliographic database, or when organizing a physical collection such as a library (Hodge et al. 2000).

KOS in the era of the Web

With the proliferation of the World Wide Web, new forms of knowledge organization principles emerged, based on hypertextuality, modularity, decentralisation and protocol-based machine communication (Berners-Lee 1998). New forms of KOSs emerged, like folksonomies, topic maps and knowledge graphs, also commonly and broadly referred to as ontologies[1].

With reference to Gruber’s (1993/1993a) classic definition:

“a common ontology defines the vocabulary with which queries and assertions are exchanged among agents” based on “ontological commitments to use the shared vocabulary in a coherent and consistent manner.”

From a technological perspective, ontologies function as an integration layer for semantically interlinked concepts, with the purpose of improving the machine-readability of the underlying knowledge model. Ontologies raise interoperability from a syntactic to a semantic level for the purpose of knowledge sharing. According to Hodge et al. (2003)

“semantic tools emphasize the ability of the computer to process the KOS against a body of text, rather than support the human indexer or trained searcher. These tools are intended for use in the broader, more uncontrolled context of the Web to support information discovery by a larger community of interest or by Web users in general.” (Hodge et al. 2003)

In other words, ontologies are considered valuable for classifying web information in that they aid in enhancing interoperability, bringing together resources from multiple sources (Saumure & Shiri 2008, p. 657).

Which KOS serves your needs?

Schaffert et al. (2005) introduce a model to classify ontologies along their scope, acceptance and expressivity, as can be seen in the figure below.

[Figure: Classification of ontologies along scope, acceptance and expressivity (Schaffert et al. 2005)]

According to this model the design of KOSs has to take account of the user group (acceptance model), the nature and abstraction level of knowledge to be represented (model scope) and the adequate formalism to represent knowledge for specific intellectual purposes (level of expressiveness). Although the proposed classification leaves room for discussion, it can help to distinguish various KOSs from each other and gain a better insight into the architecture of functionally and semantically intertwined KOSs. This is especially important under conditions of interoperability.

[1] It must be critically noted that the inflationary usage of the term “ontology” often in neglect of its philosophical roots has not necessarily contributed to a clarification of the concept itself. A detailed discussion of this matter is beyond the scope of this post. In this paper the author refers to Gruber’s (1993a) definition of ontology as “an explicit specification of a conceptualization”, which is commonly being referred to in artificial intelligence research.

The next post will look at trends in knowledge organization before and after the emergence of the World Wide Web.

Go to the previous post: Thoughts on KOS (Part 1): Getting to grips with “semantic” interoperability

References:

Gruber, Thomas R. (1993). Toward Principles for the Design of Ontologies Used for Knowledge Sharing. In International Journal Human-Computer Studies 43, pp. 907-928.

Gruber, Thomas R. (1993a). A translation approach to portable ontologies. In: Knowledge Acquisition, 5/2, pp. 199-220

Hodge, Gail (2000). Systems of Knowledge Organization for Digital Libraries: Beyond Traditional Authority Files. In: First Digital Library Federation electronic edition, September 2008. Originally published in trade paperback in the United States by the Digital Library Federation and the Council on Library and Information Resources, Washington, D.C., 2000

Hodge, Gail M.; Zeng, Marcia Lei; Soergel, Dagobert (2003). Building a Meaningful Web: From Traditional Knowledge Organization Systems to New Semantic Tools. In: Proceedings of the 2003 Joint Conference on Digital Libraries (JCDL’03), IEEE

Saumure, Kristie; Shiri, Ali (2008). Knowledge organization trends in library and information studies: a preliminary comparison of pre- and post-web eras. In: Journal of Information Science, 34/5, 2008, pp. 651–666

Schaffert, Sebastian; Gruber, Andreas; Westenthaler, Rupert (2005). A Semantic Wiki for Collaborative Knowledge Formation. In: Reich, Siegfried; Güntner, Georg; Pellegrini, Tassilo; Wahler, Alexander (Eds.). Semantic Content Engineering. Linz: Trauner, pp. 188-202

April 10 2015

12:44

Thoughts on KOS (Part 1): Getting to grips with “semantic” interoperability

Enabling and managing interoperability at the data and the service level is one of the strategic key issues in networked knowledge organization systems (KOSs) and a growing issue in effective data management. But why do we need “semantic” interoperability and how can we achieve it?

Interoperability vs. Integration

The concept of (data) interoperability can best be understood in contrast to (data) integration. While integration refers to a process where formerly distinct data sources and their representation models are merged into one newly consolidated data source, interoperability is defined by a structural separation of knowledge sources and their representation models that nevertheless allows connectivity and interactivity between these sources through deliberately defined overlaps in the representation models. Under circumstances of interoperability, data sources are designed to provide interfaces for connectivity to share and integrate data on top of a common data model, while leaving the original principles of data and knowledge representation intact. Thus, interoperability is an efficient means to improve and ease integration of data and knowledge sources.

Three levels of interoperability

When designing interoperable KOSs it is important to distinguish between structural, syntactic and semantic interoperability (Galinski 2006):

  • Structural interoperability is achieved by representing metadata using a shared data model like the Dublin Core Abstract Model or RDF (Resource Description Framework).
  • Syntactic interoperability is achieved by serializing data in a shared mark-up language like XML, Turtle or N3.
  • Semantic interoperability is achieved by using a shared terminology or controlled vocabulary to label and classify metadata terms and relations.

Given the fact that metadata standards carry a lot of intrinsic legacy, it is sometimes very difficult to achieve interoperability at all three levels mentioned above. Metadata formats and models are historically grown; they are most of the time the result of community decision processes, often highly formalized for specific functional purposes, and most of the time deliberately rigid and difficult to change. Hence it is important to have a clear understanding and documentation of the application profile of a metadata format as a precondition for enabling interoperability at all three levels. Semantic Web standards do a really good job in this respect!

In the next post, we will take a look at various KOSs and how they differ with respect to expressivity, scope and target group.

April 09 2015

08:37

Transforming music data into a PoolParty project

Goal

For the Nolde project we were asked to build a knowledge graph containing detailed information about the Austrian music scene: artists, bands and their music releases. We decided to use PoolParty, since these entities should be accessible in an editorial workflow. More details about the implementation will be provided in a later blog post.

In this first round I want to share my experiences with mapping music data into SKOS. Obviously, LinkedBrainz was the perfect source to collect and transform such data, since it is available as RDF/N-Triples dumps and even provides a SPARQL endpoint! LinkedBrainz data is modeled using the Music Ontology.

For example, you can select all mo:MusicArtists with a relation to Austria.

[Screenshot: SPARQL SELECT query]

I took the LinkedBrainz dump files and imported them into a triple store, together with DBpedia dumps.

With two CONSTRUCT queries, I was able to collect the required data and transform it into SKOS, into a PoolParty compatible format:

Construct Artists

[Screenshot: SPARQL CONSTRUCT query for Artists]

Every matching MusicArtist results in a SKOS concept. The foaf:name is mapped to skos:prefLabel (in German).

As you can see, I used Custom Schema features to provide self-describing metadata on top of pure SKOS features: a MusicBrainz link, a MusicBrainz Id, DBpedia link, homepage…

In addition, you can see in the query that data from DBpedia was also collected. In case an owl:sameAs relationship to DBpedia exists, a possible abstract is retrieved. When a DBpedia abstract is available, it is mapped to skos:definition.

Construct Releases (mo:SignalGroups) with relations to Artists

[Screenshots: SPARQL CONSTRUCT queries for Releases and their relations to Artists]

Similar to the Artists, a matching SignalGroup results in a SKOS Concept. A skos:related relationship is defined between an Artist and his Releases.

Outcome

The SPARQL CONSTRUCT queries produced ttl files that could be imported directly into PoolParty, resulting in a project containing nearly 1,000 Artists and 10,000 Releases:

[Screenshot: the resulting PoolParty thesaurus]

April 01 2015

13:06

Bazinga! Minutes from the CKAN Association Steering Group – 1 April (no joke)

Readme.txt

The following minutes represent what the Steering Group discussed today, but please remember it’s also just a meeting (context: no real work is ever done in a meeting). The objective is to discuss and assign actions when needed, to make decisions when needed, and to generally align everyone in the various ways each member is already supporting the CKAN project. Reading between the lines of this update, there are a few points to call out and make mention of.

  1. The Steering Group (SG) are renewed with energy and determination. While the last meeting might have been some time ago we have set ourselves the objective of meeting weekly (after next week) because it is clear that the CKAN project is advancing rapidly and support from the SG needs to align with the velocity of the project without any risk of holding it back. Let’s add some buzzwords and suggest that the SG is aiming to bootstrap the project and intersect on multiple vectors to achieve maximum lift via regular and meaningful engagement with its project stakeholders (Please don’t take that last sentence seriously).
  2. ‘Distill out a 1-3 pager’ in relation to the business plan means getting lean and putting focus on the most essential parts of the CKAN Association business plan. Long docs with much wordage are great in some situations but in the case of the CKAN project we have an avid community of exceptionally bright people who are fine with the key objectives, strategies and tactics put forward in the most succinct way possible.
  3. If there is to be an operating model for the SG then it will be this: Say what is going to get done. Get it done+. Let everyone know it is done.
  4. Some awesome questions are answered at the end of this post.

+ In some cases things might not actually get done but we will strive to do the best we can. Yes, we’ll be transparent with goals. Yes, we’ll be happy to take any and all feedback. Yes, we are working for the CKAN project and are ultimately governed via public peer review by the project’s community.

CKAN Association Steering Group Meeting 1 April 2015

  • Present: Ashley Casovan (Chair), Steven De Costa, Rufus Pollock (Secretary)
  • Apologies: Antonio Acuna

Minutes

  1. Steering Group Goals (next quarter)
    1. Announce more clearly existence and purpose of Steering Group
      1. steering group email alias: steering-group@ckan.org (goes to group)
    2. Announce objectives which are
      1. Finalise business plan (have now had out for consultation for some time)
        1. Distill out a 1-3 pager
        2. Finalise and announce to list
        3. hangout on air to announce
      2. Community meetings
        1. Technical team to run at least one (general) developer community meeting in next 2m
        2. At least one users community meeting in next 2m
  2. Responsibilities of the SG
    1. Like a board – see http://ckan.org/about/steering-group/
    2. Similarities to Drupal Board: job is to support the community in moving the project forward – self-determination
  3. Review Actions
    1. https://trello.com/b/D6zxiuFJ/ckan-association-steering-group – primarily business plan and response to questions [note this Trello board is private]
  4. CKAN Event at IODC
    1. CIOs and CTOs – CKAN is part of national and regional infrastructure
    2. efficiency gains on open data
    3. https://github.com/ckan/ideas-and-roadmap/issues/120
    4. Technical capability
  5. Review of student position description
    1. Ashley to send out to SG members for comment
  6. Meeting schedule: SG will meet weekly (for present) every Thursday at 12 noon UK (for 30m)
  7. Publishing minutes from this meeting – will aim to send asap

Your questions answered

Q: Is the SG interested in increasing transparency of the SG meetings? How will this be achieved?

SG: Yes. This was discussed and we would like to propose the next meeting be run in two parts. One part will be closed to attend to some regular business of the group with regard to coordinating efforts. The second half will be broadcast as a Hangout on Air for people to watch. We’ll aim to collect questions ahead of the meetings and address them during the broadcast with further options for an active Q&A session from the audience.

Q: Have the SG determined whether members of the association are yet contributing funds, or developers to the project? What are they? What happens if members don’t?

SG: There is ongoing work in this area. Most members are contributing in-kind (not exclusively developers). We’d be happy to make the pledged contributions public via the members listing on CKAN.org. At this time it is an honour system with regard to meeting membership obligations that are provided in-kind. If a member is suspected of not providing the expected level of in-kind contribution then the Steering Group will investigate and consider appropriate actions upon conclusion of such investigations.

Q: How does the SG see its role with respect to providing direction for the project?

SG: Support the community of both technical stakeholders and users in ways which allow them to act in concert to move the project forward in the direction these stakeholders determine to be best for the project.

Q: How is the SG raising more funds, other than membership, to further fund development of CKAN?

SG: This is a question the Steering Group is working through currently. Our focus is on the Business Plan and putting strategic objectives down for all to see via that document. Grant applications and the coordination of requirements to meet the needs of a group of platform owners is also being considered. With the latter the proposed approach is to release an expression of interest for funding support against specific development activities. Those who highly value such activities would be asked to help contribute to a pool of funds that would then see the development work paid for.

March 31 2015

05:13

CKAN Association Steering Group – about to set sail!

The CKAN Association Steering Group will be meeting about 30 hours from now. I wanted to make sure we took the opportunity to ask for community questions regarding the CKAN project.

So, please comment here with any questions you might like discussed and/or answered by the steering group :)

This will be my first chance to catch up with everyone in the group so I will have lots of questions of my own. I’m also keen to provide updates on how I see things going with regard to developing and extending the CKAN community and its reach through communications activities. We have a modest starting point, so updates will be easy to provide. It would be great to get comments via this post on what more people would like to see. However, there are many action items incomplete from within the Community and Communications Team so I’ll also be reporting on that. We don’t yet have a list of CKAN vendors and this is clearly needed, based on the number of CKAN Dev list requests regarding upgrade questions when planning a move to 2.3.

Some great positive indicators I see for the project are the number of people active on the CKAN Dev email list and the high volume of quality conversations that are taking place there. It appears that the 2.3 release has been the catalyst needed for a fantastic reinvestment (at least publicly) from both the regular technical team members and the wider community of awesome people doing amazing stuff within their own open data projects. I would like those on the steering group to recognise this change and actively work to support ###MOAR###!

As a new member of the steering group I should introduce myself. You can see the bio attached to this post but for a fresh video-cast of something I’m involved in within my local area you can also take a look at the Australian Open Knowledge Chapter Board meeting that was held earlier today. The video is embedded below. I do actually mention the work I’ve been doing within the CKAN association at some point so please excuse the ‘inception’-like self referential nature of all this.

The main message here is – steering group meeting in about 30 hours. Please comment on this post to amplify your voice within that forum.

Rock on! Steven

 

March 20 2015

12:05

Presenting public finance just got easier

[Image: Mexico CKAN / OpenSpending]

CKAN 2.3 is out! The world-famous data handling software suite which powers data.gov, data.gov.uk and numerous other open data portals across the world has been significantly upgraded. How can this version open up new opportunities for existing and coming deployments? Read on.

One of the new features of this release is the ability to create extensions that get called before and after a new file is uploaded, updated, or deleted on a CKAN instance.

This may not sound like a major improvement, but it creates a lot of new opportunities. Now it’s possible to analyse the files (which are called resources in CKAN) and put them to new uses based on that analysis. To showcase how this works, Open Knowledge, in collaboration with the Mexican government, the World Bank (via the Partnership for Open Data), and the OpenSpending project, has created a new CKAN extension which uses this new feature.

It’s actually two extensions. One, called ckanext-budgets, listens for creation and updates of resources (i.e. files) in CKAN, and when that happens the extension analyses the resource to see if it conforms to the data file part of the Budget Data Package specification. The Budget Data Package specification is a relatively new specification for budget publications, designed for comparability, flexibility, and simplicity. It’s similar to data packages in that it provides metadata around simple tabular files, like a csv file. If the csv file (a resource in CKAN) conforms to the specification (i.e. the columns have the correct titles), then the extension automatically creates the Budget Data Package metadata based on the CKAN resource data and makes the complete Budget Data Package available.

It might sound very technical, but it really is very simple. You add or update a csv file resource in CKAN and it automatically checks whether it contains budget data in order to publish it in a standardised form. In other words, CKAN can now automatically produce standardised budget resources, which makes integration with other systems a lot easier.

The second extension, called ckanext-openspending, shows how easy such an integration around standardised data is. The extension takes the published Budget Data Packages and automatically sends them to OpenSpending. From there OpenSpending does its own thing: it analyses the data, aggregates it, and makes it very easy to use for those who use OpenSpending’s visualisation library.

So thanks to a perhaps seemingly insignificant extension feature in CKAN 2.3, getting beautiful and understandable visualisations of budget spreadsheets is now only an upload to a CKAN instance away (and can only get easier as the two extensions improve).

To learn even more, see this report about the CKAN and OpenSpending integration efforts.

March 11 2015

20:18

If ‘Change’ had a favourite number…it would be 2.3

There’s something about the number 2.3. It just rolls off the tongue with such an easy rectitude. Western families reportedly average 2.3 children; there were 2.3 million Americans out of work when Barack Obama took office; Starbucks go through 2.3 million paper cups a year. But the 2.3 that resonates with me most is 2.3 billion. That was the world population in the late 1940s, and growing. WWII was over and we were finally able to stand up, dust off the despair of war and Depression, bask in a renewed confidence in the future, and make a lot of babies. We were on the brink of something and what those babies didn’t know yet was that they would grow up to usher in a wave of unprecedented social, economic and technological change.

We are on the brink again. Open data is gaining momentum faster than the Baby Boomers are growing old  and it has the potential to steer that wave of change in all manner of directions. We’re ready for the next 2.3. Enter CKAN 2.3.

Here are some of the most exciting updates:

  • Completely refactored resource data visualizations, allowing multiple persistent views of the same data and an interface to manage and configure them. Check the updated documentation to learn more, and the “Changes and deprecations” section for migration details: http://docs.ckan.org/en/ckan-2.3/maintaining/data-viewer.html

  • Responsive design for the default theme, that allows nicer rendering across different devices

  • Improved DataStore filtering and full text search capabilities

  • Added new extension points to modify the DataStore behaviour

  • Simplified two-step dataset creation process

  • Ability for users to regenerate their own API keys

  • Changes on the authentication mechanism to allow more secure set-ups. See “Changes and deprecations” section for more details and “Troubleshooting” for migration instructions.

  • Better support for custom dataset types

  • Updated documentation theme, now clearer and responsive

If you are upgrading from a previous version, make sure to check the “Changes and deprecations” section in the CHANGELOG, especially regarding the authorization configuration and data visualizations.

To install the new version, follow the relevant instructions from the documentation depending on whether you are using a package or source install: http://docs.ckan.org/en/ckan-2.3/maintaining/installing/index.html

If you are upgrading an existing CKAN instance, follow the upgrade instructions: http://docs.ckan.org/en/ckan-2.3/maintaining/upgrading/index.html

We have also made available patch releases for the 2.0.x, 2.1.x and 2.2.x versions. It is important to apply these, as they contain important security and stability fixes. Patch releases are fully backwards compatible and really easy to install: http://docs.ckan.org/en/latest/maintaining/upgrading/upgrade-package-to-patch-release.html

Charting the CKAN boom.

The following graph charts population from 1800 to 2100 but we’re interested in the period from the mid-1940s when there was a marked boost in population growth.


World population estimates from 1800 to 2100. Sourced from Wikipedia: http://en.wikipedia.org/wiki/World_population The growth from 2.3 Billion in the 1940s is the Boom!

With the recent release of CKAN 2.3 we’re expecting a similar boost in community contributions. To add your voice to the community and boost the profile of the CKAN project please share a picture on twitter and include the hashtag #WeAreCKAN.

