All posts by Julia Holze

The Diffbot Knowledge Graph and Extraction Tools

DBpedia Member Features – In the last few weeks, we gave DBpedia members the chance to present special products, tools and applications and share them with the community. We already published several posts in which DBpedia members provided unique insights. This week we will continue with Diffbot. They will present the Diffbot Knowledge Graph and various extraction tools. Have fun while reading!

by Diffbot

Diffbot’s mission to “structure the world’s knowledge” began with Automatic Extraction APIs meant to pull structured data from most pages on the public web by leveraging machine learning rather than hand-crafted rules.

More recently, Diffbot has emerged as one of only three Western entities to crawl a vast majority of the web, utilizing our Automatic Extraction APIs to build the world’s largest commercially available Knowledge Graph.

A Knowledge Graph At The Scale Of The Web

The Diffbot Knowledge Graph is automatically constructed by crawling and extracting data from over 60 billion web pages. It currently represents over 10 billion entities and 1 trillion facts about People, Organizations, Products, Articles, Events and more.

Users can access the Knowledge Graph programmatically through an API. Other ways to access the Knowledge Graph include a visual query interface and a range of integrations (e.g., Excel, Google Sheets, Tableau). 

Visually querying the web like a database


Whether you’re consuming Diffbot KG data in a visual “low code” way or programmatically, we’ve continually added features to our powerful query language (Diffbot Query Language, or DQL) to allow users to “query the web like a database.” 
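As a concrete illustration, a programmatic DQL query from Python might look like the sketch below. The endpoint, parameter names and DQL field names are assumptions drawn from Diffbot’s public documentation, and the token is a placeholder:

```python
import requests

# Hedged sketch of a DQL query against the Diffbot Knowledge Graph.
# Endpoint, parameters and DQL field names are assumptions from the
# public docs; DIFFBOT_TOKEN is a placeholder.
DIFFBOT_TOKEN = "YOUR_TOKEN"

resp = requests.get(
    "https://kg.diffbot.com/kg/v3/dql",
    params={
        "token": DIFFBOT_TOKEN,
        "type": "query",
        # Example DQL: organizations headquartered in Berlin
        "query": 'type:Organization locations.city.name:"Berlin"',
        "size": 10,
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("data", []):
    print(item.get("entity", {}).get("name"))
```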

Guilt-Free Public Web Data

Current use cases for Diffbot’s Knowledge Graph and web data extraction products run the gamut and include data enrichment; lead enrichment; market intelligence; global news monitoring; large-scale product data extraction for ecommerce and supply chain; sentiment analysis of articles, discussions, and products; and data for machine learning. For all of the billions of facts in Diffbot’s KG, data provenance is preserved with the original source (a public URL) of each fact.

Entities, Relationships, and Sentiment From Private Text Corpora 

The team of researchers at Diffbot has been developing new natural language processing techniques for years to improve their extraction and KG products. In October 2020, Diffbot made this technology commercially available to all via the Natural Language API.

Our Natural Language API Demo Parsing Text Input About Diffbot Founder, Mike Tung

Our Natural Language API pulls out entities, relationships/facts, categories and sentiment from free-form texts. This allows organizations to turn unstructured texts into structured knowledge graphs. 
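As a rough illustration, a call to the Natural Language API might look like the following Python sketch; the endpoint, parameters and response shape are assumptions based on Diffbot’s public documentation, not an authoritative client:

```python
import requests

# Hedged sketch of a Natural Language API call; endpoint, parameters
# and response shape are assumptions from Diffbot's public docs.
DIFFBOT_TOKEN = "YOUR_TOKEN"  # placeholder

resp = requests.post(
    "https://nl.diffbot.com/v1/",
    params={"fields": "entities,facts,sentiment", "token": DIFFBOT_TOKEN},
    json=[{"content": "Diffbot was founded by Mike Tung.", "lang": "en"}],
    timeout=30,
)
resp.raise_for_status()
doc = resp.json()[0]  # one result object per submitted document
for ent in doc.get("entities", []):
    print(ent.get("name"), "-", ent.get("diffbotUri"))
print("document sentiment:", doc.get("sentiment"))
```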

Diffbot and DBpedia

In addition to extracting data from web pages, Diffbot’s Knowledge Graph compiles public web data from many structured sources. One important source of knowledge is DBpedia. Diffbot also contributes to DBpedia by providing access to our extraction and KG services and collaborating with researchers in the DBpedia community. For a recent collaboration between DBpedia and Diffbot, be sure to check out the Diffbot track in DBpedia’s Autumn Hackathon for 2020.

A big thank you to Diffbot, especially Filipe Mesquita, for presenting their innovative Knowledge Graph.

Yours,

DBpedia Association

A year with DBpedia – Retrospective Part 2/2020

This is the final part of our journey through 2020. In the previous blog post we already presented DBpedia highlights, events and tutorials. Now we want to take a deeper look at the second half of 2020 and give an outlook for 2021.

DBpedia Autumn Hackathon and the KGiA Conference

From September 21st to October 1st, 2020 we organized the first Autumn Hackathon. We invited all community members to join and contribute to this new format. You had the chance to experience the latest technology provided by the DBpedia Association members. We hosted special member tracks, a Dutch National Knowledge Graph Track and a track to improve DBpedia. Results were presented at the final hackathon event on October 5, 2020. We uploaded all contributions to our YouTube channel. Many thanks for all your contributions and invested time!

The Knowledge Graphs in Action event

Opening the KGiA event on October 6, 2020

The SEMANTiCS Onsite Conference 2020 had to be postponed till September 2021. To bridge the gap until 2021, we took the opportunity to organize the Knowledge Graphs in Action online track as a SEMANTiCS satellite event on October 6, 2020. This new online conference is a combination of two existing events: the DBpedia Community Meeting, which is regularly held as part of the SEMANTiCS, and the annual Spatial Linked Data conference organised by EuroSDR and the Platform Linked Data Netherlands. We glued it together and as a bonus we added a track about Geo-information Integration organized by EuroSDR. As special joint sessions we presented four keynote speakers. More than 130 knowledge graph enthusiasts joined the KGiA event and it was a great success for the organizing team. Did you miss the event? No problem! We uploaded all recorded sessions to the DBpedia YouTube channel.

KnowConn Conference 2020

Our CEO, Sebastian Hellmann, gave the talk ‘DBpedia Databus – A platform to evolve knowledge and AI from versioned web files’ on December 2, 2020 at the KnowledgeConnexions Online Conference. It was a great success and we received a lot of positive and constructive feedback for the DBpedia Databus. If you missed his talk and are looking for Sebastian’s slides, please check here: http://tinyurl.com/connexions-202

DBpedia Archivo – Call to improve the web of ontologies

Search bar to inspect an archived ontology in DBpedia Archivo

On December 7, 2020 we introduced DBpedia Archivo – an augmented ontology archive and interface to implement FAIRer ontologies. Each ontology is rated with up to 4 stars measuring basic FAIR features. We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. Further information on how to help us is available in a detailed post on our blog.

Member features on the blog

At the beginning of November 2020 we started the member feature on the blog. We gave DBpedia members the chance to present special products, tools and applications. We published several posts in which DBpedia members, like Ontotext, GNOSS, the Semantic Web Company, TerminusDB or FinScience, shared unique insights with the community. At the beginning of 2021 we will continue with interesting posts and presentations. Stay tuned!

We do hope we will meet you and some new faces during our events next year. The DBpedia Association wants to get to know you because DBpedia is a community effort and would not continue to develop, improve and grow without you. We plan to have meetings in 2021 at the Knowledge Graph Conference, the LDK conference in Zaragoza, Spain and the SEMANTiCS conference in Amsterdam, Netherlands.

Happy New Year to all of you! Stay safe and check Twitter, LinkedIn and our Website or subscribe to our Newsletter for the latest news and information.

Yours,

DBpedia Association

2020 – Oh What a Challenging Year

Can you believe it? Thirteen years ago the first DBpedia dataset was released. Thirteen years of development, improvements and growth. Now more than 2,600 GB of data have been uploaded to the DBpedia Databus. We want to take this as an opportunity to send out a big thank you to all contributors, developers, coders, hosters, funders, believers and DBpedia enthusiasts who made that possible. Thank you for your support!

In this blog series, we would like to take you on a retrospective tour through 2020, giving you insights into a year with DBpedia. We will highlight our past events and the development around the DBpedia dataset.

A year with DBpedia and the DBpedia dataset – Retrospective Part 1

DBpedia Workshop colocated with LDAC2020

On June 19, 2020 we organized a DBpedia workshop co-located with the LDAC workshop series to exchange knowledge regarding new technologies and innovations in the fields of Linked Data and the Semantic Web. Dimitris Kontokostas (Diffbot, US) opened the meeting with his delightful keynote presentation ‘{RDF} Data quality assessment – connecting the pieces’. His presentation focused on defining data quality and identifying data quality issues. Following Dimitris’ keynote, many community-based presentations were held, enabling an exciting workshop day.

Most Influential Scholars

DBpedia has become a high-impact, high-visibility project because of our foundation in excellent Knowledge Engineering as the pivot point between scientific methods, innovation and industrial-grade output. Among the drivers behind DBpedia are six of the top 10 Most Influential Scholars in Knowledge Engineering, as well as the C-level executives of our members. Check all details here: https://www.aminer.cn/ai2000/country/Germany

DBpedia (dataset) and Google Summer of Code 2020

For the 9th year in a row, we were part of this incredible journey of young ambitious developers who joined us as an open source organization to work on a GSoC coding project all summer. With 45 project proposals, this GSoC edition marked a new record for DBpedia. Even though Covid-19 changed a lot in the world, it couldn’t shake GSoC. If you want deeper insights into our GSoC students’ work, you can find their blogs and repos here: https://blog.dbpedia.org/2020/10/12/gsoc2020-recap/

DBpedia Tutorial Series 2020

Stack slide from the tutorial

During this year we organized three amazing tutorials in which more than 120 DBpedians took part. Over the last year, the DBpedia core team has consolidated a great amount of technology around DBpedia. These tutorials are targeted at developers (in particular of DBpedia Chapters) who wish to learn how to replicate local infrastructure, such as loading data and hosting their own SPARQL endpoint. A core focus was the new DBpedia Stack, which contains several dockerized applications that automatically load data from the DBpedia Databus. We will continue organizing more tutorials in 2021. Looking forward to meeting you online! In case you missed the DBpedia Tutorial series 2020, watch all videos here.

In our upcoming blog post after the holidays we will give you more insights into past events and technical achievements. We are now looking forward to the year 2021. The DBpedia team plans to have meetings at the Knowledge Graph Conference, the LDK conference in Zaragoza, Spain and the SEMANTiCS conference in Amsterdam, Netherlands. We wish you a merry Christmas and a happy New Year. In the meantime, stay tuned and visit our Twitter channel or subscribe to our DBpedia Newsletter.

Yours,

DBpedia Association

FinScience: leveraging DBpedia tools for fintech applications

DBpedia Member Features – In the last few weeks, we gave DBpedia members the chance to present special products, tools and applications and share them with the community. We already published several posts in which DBpedia members provided unique insights. This week we will continue with FinScience. They will present their latest products, solutions and challenges. Have fun while reading!

by FinScience

A brief presentation of who we are

FinScience is an Italian data-driven fintech company founded in 2017 in Milan by former Google senior managers and Alternative Data experts, who have combined their digital and financial expertise. FinScience thus originates from the merger of the world of Finance and the world of Data Science.
The company leverages the founders’ experience in Data Governance, Data Modeling and Data Platform solutions. These are further enriched through the company’s technical role in the European consortium SSIX (Horizon 2020 program), which focused on building a social sentiment index for financial purposes. FinScience applies proprietary AI-based technologies to combine financial data/insights with alternative data in order to generate new investment ideas, ESG scores and non-conventional lists of companies that can be included in investment products by financial operators.

FinScience’s data analysis pipeline is strongly grounded in the DBpedia ontology: the greatest value, in our experience, lies in the possibility to connect knowledge in different languages, to query automatically extracted structured information and to have rather frequently updated models.

Products and solutions

FinScience retrieves content from the web daily. About 1.5 million web pages are visited every day across about 35,000 different domains. The content of these pages is extracted, interpreted and analysed via Natural Language Processing techniques to identify valuable information and sources. Thanks to the structured information based on the DBpedia ontology, we can apply our proprietary AI algorithms to suggest the right investment opportunities to our customers. Our products are mainly based on the integration of this purely digital data – we call it “alternative data” – with traditional sources coming from the world of finance and sustainability. We describe these products briefly:

  • FinScience Platform for traders: it leverages the power of machine learning to help traders monitor specific companies, spot new trends in the financial market and gain access to a high added-value selection of companies and themes.
  • ESG scoring: we provide an assessment of corporate ESG performance by combining internal data (traditional, self-disclosed data) with external ‘alternative’ data (stakeholder-generated data) in order to measure the gap between what companies communicate and how stakeholders perceive their sustainability commitments.
  • Thematic selections of listed companies: we create Trend-Driven selections oriented towards innovative themes. Our data, together with the analysis of financial specialists, contributes to the selection of a set of listed companies related to trending themes such as the Green New Deal, 5G technology or new medtech applications.

FinScience and DBpedia

As mentioned before, FinScience is strongly grounded in the DBpedia ontology, since we employ Spotlight to perform Named Entity Recognition (NER), namely the automatic annotation of entities in a text. The NER task is performed with a two-step procedure. The first step consists of annotating the named entities of a text using DBpedia Spotlight. In particular, Spotlight links a mention in the text (identified by its name and its context within the text) to the DBpedia entity that maximizes the joint probability of occurrence of both. The model is pre-trained on texts extracted from Wikipedia. Note that each entity is represented by a link to a DBpedia page (see, e.g., http://dbpedia.org/page/Eni), a DBpedia type indicating the type of the entity according to the DBpedia ontology, and other information.
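For illustration, the annotation step can be tried against the public DBpedia Spotlight endpoint (FinScience runs its own, regularly updated models; the text and confidence threshold below are just examples):

```python
import requests

# Annotate free text with DBpedia Spotlight (public demo endpoint shown
# for illustration only; production setups typically host their own).
text = "Eni is an Italian multinational energy company headquartered in Rome."

resp = requests.get(
    "https://api.dbpedia-spotlight.org/en/annotate",
    params={"text": text, "confidence": 0.5},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for res in resp.json().get("Resources", []):
    # Each mention is linked to a DBpedia entity URI and its types
    print(res["@surfaceForm"], "->", res["@URI"], res["@types"])
```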

Another interesting feature of this approach is that we have a one-to-one mapping between the Italian and English entities (and in general any language supported by DBpedia), allowing us to have a unified representation of an entity in the two languages. We are able to obtain this kind of information by exploiting the potential of DBpedia’s Virtuoso endpoint, which allows us to access the DBpedia dataset via SPARQL. By identifying the entities mentioned in online content, we can understand which topics are mentioned and thus identify companies and trends that are spreading in the digital ecosystem, as well as analyze how they are related to each other.
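A minimal sketch of such a cross-language lookup, assuming the public Virtuoso endpoint at https://dbpedia.org/sparql and the interlanguage owl:sameAs links it serves:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Sketch: resolve the Italian counterpart of an English DBpedia entity
# via the public Virtuoso SPARQL endpoint (interlanguage owl:sameAs links).
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?same WHERE {
      <http://dbpedia.org/resource/Eni> owl:sameAs ?same .
      FILTER (STRSTARTS(STR(?same), "http://it.dbpedia.org/"))
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["same"]["value"])
```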

Challenges and next steps

One of the toughest challenges for us is to find an optimal way to update the models used by DBpedia Spotlight. Every day new entities and concepts arise, and we want to recognize them in the news we analyze. And that is not all. In addition to recognizing new concepts, we need to be able to track an entity through all the updated versions of the model. In this way, we will not only be able to identify entities, but we will also have evidence of when some concepts were first generated. And we will know how they have changed over time, regardless of the names that have been used to identify them.

We are strongly involved in the DBpedia community and we try to contribute with our know-how. In particular, FinScience will contribute to infrastructure and Dockerfiles, as well as to finding issues in newly released projects (for instance, wikistats-extractor).

A big thank you to FinScience for presenting their products, challenges and contribution to DBpedia.

Yours,

DBpedia Association

DBpedia Archivo – Call to improve the web of ontologies

Dear all, 

We are proud to announce DBpedia Archivo – an augmented ontology archive and interface to implement FAIRer ontologies. Each ontology is rated with up to 4 stars measuring basic FAIR features. We discovered 890 ontologies, reaching on average 1.95 out of 4 stars. Many of them have no or unclear licenses and have issues w.r.t. retrieval and parsing.

DBpedia Archivo: Community action on individual ontologies

We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version – Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo.
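For consumers who want to script such checks, here is a hedged sketch of fetching the latest archived snapshot of an ontology from Archivo; the download endpoint and its parameters (o for the ontology IRI, f for the serialization) are assumptions based on Archivo’s documentation, so verify them on the site:

```python
import requests

# Hedged sketch: download the latest archived snapshot of an ontology
# from DBpedia Archivo. The endpoint and the parameters `o` (ontology
# IRI) and `f` (format) are assumptions; check archivo.dbpedia.org.
resp = requests.get(
    "https://archivo.dbpedia.org/download",
    params={"o": "http://datashapes.org/dash", "f": "ttl"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text[:300])  # first characters of the archived Turtle file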

The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.

Community action on all ontologies (quality, FAIRness, conformity)

Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies.

  1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia’s CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on GitHub; see the sketch after this list for what such a test run can look like. We believe that there are many synergies, e.g. SHACL tests for your ontology are helpful for others as well. 
  2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aides, mapping management tools and quality checks. 
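To give a flavour of such SHACL testing on your own machine, here is a minimal sketch using the pyshacl library; the file names are placeholders, and this only mimics, rather than reproduces, Archivo’s CI setup:

```python
from pyshacl import validate

# Minimal local SHACL check, similar in spirit to Archivo's continuous
# integration testing. Both file names are placeholders.
conforms, _report_graph, report_text = validate(
    "my_ontology.ttl",            # ontology under test
    shacl_graph="my_shapes.ttl",  # SHACL shapes encoding the requirements
)
print("conforms:", conforms)
print(report_text)
```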

How does DBpedia Archivo work?

Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates and archives the latest snapshot persistently on the DBpedia Databus.

Archivo’s mission

Archivo’s mission is to improve the FAIRness (findability, accessibility, interoperability and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable and enforces interoperability with its star rating.

– Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.

– Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentralized web of ontologies.

Please find the current paper about DBpedia Archivo here: https://svn.aksw.org/papers/2020/semantics_archivo/public.pdf 

Let’s all join together to make the web of ontologies more reliable and stable.

Yours,

Johannes Frey, Denis Streitmatter, Fabian Götz, Sebastian Hellmann and Natanael Arndt

PoolParty Semantic Suite: The Ideal Tool To Build And Manage Enterprise Knowledge Graphs

DBpedia Member Features – In the coming weeks, we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. This week the Semantic Web Company will present use cases for the PoolParty Semantic Suite. Have fun while reading!

by the Semantic Web Company

About 80 to 90 percent of the information companies generate is extremely diverse and unstructured — stored in text files, e-mails or similar documents, which makes it difficult to search and analyze. Knowledge graphs have become a well-known solution to this problem because they make it possible to extract information from text and link it to other data sources, whether structured or not. However, building a knowledge graph at enterprise scale can be challenging and time-consuming.

PoolParty Semantic Suite is the most complete and secure semantic platform on the global market. It is also the ideal tool to help companies build and manage Enterprise Knowledge Graphs. With PoolParty in place, you will have no problems extracting value from large amounts of heterogeneous data, no matter if it’s stored in a relational database or in text files. The platform provides comprehensive tools for the management of enterprise knowledge graphs along the entire life cycle. Here is a list of the main use cases for the PoolParty Semantic Suite:

Data linking and enrichment

Driven by the Linked Data initiative, increasing amounts of viable data sets about various topics have been published on the Semantic Web. PoolParty allows users to use these online resources, amongst them DBpedia, to easily and quickly enrich a thesaurus with more data.
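Purely as an illustration of the kind of lookup behind such enrichment (PoolParty performs this through its UI, not through user code), fetching multilingual labels for a concept from DBpedia might look like this; the concept and languages are example choices:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Illustration only: fetch multilingual labels for a DBpedia resource,
# the kind of data a thesaurus-enrichment step pulls in.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/Semantic_Web> rdfs:label ?label .
      FILTER (lang(?label) IN ("en", "de", "fr"))
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])
```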

Search and recommender engines

Arrive at enriched and in-depth search results that provide relevant facts and contextualized answers to your specific questions, rather than a broad search result with many (ir)relevant documents and messages – but no valuable input. PoolParty Semantic Suite can be used to implement semantic search and recommendations that are relevant to your users.

Text Mining and Auto Tagging

Manually tagging an entire database is very time-consuming and often leads to inconsistent search results. PoolParty’s graph-based text mining can improve this process, making it faster, more consistent and more precise. This is achieved by using advanced text mining algorithms and Natural Language Processing to automatically extract relevant entities, terms and other metadata from text and documents, helping drive in-depth text analytics.

Data Integration and Data Fabric

The Semantic Data Fabric is a new solution to data silos that combines the best-of-breed technologies, data catalogs and knowledge graphs, based on Semantic AI. With a semantic data fabric, companies can combine text and documents (unstructured) with data residing in relational databases and data warehouses (structured) to create a comprehensive view of their customers, employees, products, and other vital areas of business.

Taxonomies, Ontologies and Knowledge Graphs That Scale

With release 8.0 of the PoolParty Semantic Suite, users have even more options to conveniently generate, edit, and use knowledge graphs. In addition, the powerful and performant GraphDB by Ontotext has been added as PoolParty’s recommended embedded store and it is shipped as an add-on module. GraphDB is an enterprise-level graph database with state-of-the-art performance, scalability and security. This provides greater robustness to PoolParty and allows you to work with much larger taxonomies effectively.

A big thank you to the Semantic Web Company for presenting use cases for the PoolParty Semantic Suite.

Yours,

DBpedia Association


TerminusDB and DBpedia

DBpedia Member Features – In the coming weeks, we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. This week TerminusDB will show you how to use TerminusDB’s unique collaborative features to access DBpedia data. Have fun while reading!

by Luke Feeney from TerminusDB

This post introduces TerminusDB as a member of the DBpedia Association – proudly supporting the important work of DBpedia. It will also show you how to use TerminusDB’s unique collaborative features to access DBpedia data.

TerminusDB – an Open Source Knowledge Graph

TerminusDB is an open-source knowledge graph database that provides reliable, private & efficient revision control & collaboration. If you want to collaborate with colleagues or build data-intensive applications, nothing will make you more productive.

TerminusDB provides the full suite of revision control features and TerminusHub allows users to manage access to databases and collaboratively work on shared resources.

  • Flexible data storage, sharing, and versioning capabilities
  • Collaboration for your team or integrated in your app
  • Work locally then sync when you push your changes
  • Easy querying, cleaning, and visualization
  • Integrate powerful version control and collaboration for your enterprise and individual customers.

The TerminusDB project originated in Trinity College Dublin in Ireland in 2015. From its earliest origins, TerminusDB worked with DBpedia through the ALIGNED project, which was a research project funded by Horizon 2020 that focused on building quality-centric software for data management.

ALIGNED Project with early TerminusDB (then called ‘Dacura’) and DBpedia


While working on this project, and especially our work building the architecture behind Seshat: The Global History Databank, we needed a solution that could enable collaboration among a highly distributed team on a shared database whose primary function was the curation of high-quality datasets with a very rich structure. While the scale of the data was not particularly large, the complexity was extremely high. Unfortunately, the linked-data and RDF toolchains were severely lacking – we evaluated several tools in an attempt to architect a solution; however, in the end we were forced to build an end-to-end solution ourselves.

Evolution of TerminusDB

In general, we think that computers are fantastic things because they allow you to leverage much more evidence when making decisions than would otherwise be possible. It is possible to write computer programs that automate the ingestion and analysis of unimaginably large quantities of data.

If the data is well chosen, it is almost always the case that computational analysis reveals new and surprising insights simply because it incorporates more evidence than could possibly be captured by a human brain. And because the universe is chaotic and there are combinatorial explosions of possibilities all over the place, evidence is always better than intuition when seeking insight.

As anybody who has grappled with computers and large quantities of data will know, it’s not as simple as that. Computers should be able to do most of this for us. It makes no sense that we are still writing the same simple and tedious data validation and transformation programs over and over ad infinitum. There must be a better way.

This is the problem that we set out to solve with TerminusDB. We identified two indispensable characteristics that were lacking in data management tools:

  1. A rich and universally machine-interpretable modelling language. If we want computers to be able to transform data between different representations automatically, they need to be able to describe their data models to one another.
  2. Effective revision control. Revision control technologies have been instrumental in turning software production from a craft to an engineering discipline because they make collaboration and coordination between large groups much more fault tolerant. The need for such capabilities is obvious when dealing with data – where the existence of multiple versions of the same underlying dataset is almost ubiquitous and with only the most primitive tool support.

TerminusDB and DBpedia

Team TerminusDB took part in the DBpedia Autumn Hackathon 2020. As you know, DBpedia is an extract of the structured data from Wikipedia.

Our Hackathon Project Board

You can read all about our DBpedia Autumn Hackathon adventures in this blog post.

Open Source

Unlike many systems in the graph database world, TerminusDB is committed to open source. We believe in the principles of open source, open data and open science. We welcome all those data people who want to contribute to the general good of the world. This is very much in alignment with the DBpedia Association and community.

DBpedia on TerminusHub

TerminusHub is the collaboration point between TerminusDBs. You can push data to your colleagues and collaborators, pull updates (efficiently – just the diffs) and clone databases that are made available on the Hub (by the TerminusDB team or by others). Think of it as GitHub, but for data.

The DBpedia database is available on TerminusHub. You can clone the full DB in a couple of minutes (depending on your internet connection, of course) and get querying. TerminusDB uses succinct data structures to compress everything, which makes sharing large databases feasible – more technical detail for interested parties here: https://github.com/terminusdb/terminusdb/blob/dev/docs/whitepaper/terminusdb.pdf
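A hedged sketch of what cloning from TerminusHub could look like with the terminusdb-client Python package; the method names, arguments and URLs below are assumptions from the client documentation of the time, so check the current docs before relying on them:

```python
from terminusdb_client import WOQLClient

# Hedged sketch only: method names and arguments are assumptions based
# on the terminusdb-client documentation of the time; consult the
# current docs. URLs and credentials are placeholders.
client = WOQLClient("https://127.0.0.1:6363")
client.connect(user="admin", account="admin", key="root")

# Clone the DBpedia database shared on TerminusHub into a local copy.
# The remote URL below is a placeholder, not the real Hub address.
client.clonedb(
    {"remote_url": "https://hub.terminusdb.com/<org>/dbpedia", "label": "DBpedia"},
    "dbpedia_local",
)
```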

TerminusDB in the DBpedia Association

We will contribute to DBpedia by working to improve the quality of data available, by introducing new datasets that can be integrated with DBpedia, and by participating fully in the community.

We are looking forward to a bright future together.

A big thank you to Luke and TerminusDB for presenting how TerminusDB works and how they would like to work with DBpedia in the future.

Yours,

DBpedia Association

Ontotext GraphDB on DBpedia

DBpedia Member Features – In the coming weeks we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. Ontotext will start with the GraphDB database. Have fun while reading!

 by Milen Yankulov from Ontotext

GraphDB is a family of highly efficient, robust and scalable RDF databases. It streamlines the loading and use of linked data cloud datasets, as well as your own resources. For easy use and compatibility with industry standards, GraphDB implements the RDF4J framework interfaces and the W3C SPARQL Protocol specification, and supports all RDF serialization formats. The database offers an open-source API and is the preferred choice of both small independent developers and big enterprise organizations because of its community and commercial support, as well as excellent enterprise features such as cluster support and integration with external high-performance search applications – Lucene, Solr and Elasticsearch. GraphDB is built 100% in Java in order to be OS-platform independent.

GraphDB is one of the few triplestores that can perform semantic inferencing at scale, allowing users to derive new semantic facts from existing facts. It handles massive loads, queries, and inferencing in real-time.
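To make this tangible, here is a minimal sketch of querying a local GraphDB repository over the standard RDF4J REST protocol; the repository name is a placeholder, and the infer parameter (our assumption from GraphDB’s documentation) toggles whether inferred statements are returned:

```python
import requests

# Sketch: send a SPARQL query to a local GraphDB repository over the
# standard RDF4J REST endpoint. "my_repo" is a placeholder; the `infer`
# parameter is an assumption from GraphDB's docs and controls whether
# reasoner-derived statements are included in the results.
resp = requests.get(
    "http://localhost:7200/repositories/my_repo",
    params={
        "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 5",
        "infer": "true",  # include statements derived by the reasoner
    },
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for b in resp.json()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```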

GDB Architecture

GraphDB Workbench

Workbench is the GraphDB web-based administration tool. The user interface is similar to the RDF4J Workbench Web Application, but with more functionality.

GraphDB Engine

The GraphDB Workbench REST API can be used for managing locations and repositories programmatically, as well as for managing a GraphDB cluster. It includes connecting to remote GraphDB instances (locations), activating a location, and different ways of creating a repository.

It also includes connecting workers to masters, connecting masters to each other, as well as monitoring the state of a cluster.
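A small hedged example of driving the Workbench REST API from Python; the /rest/repositories path and the response fields are assumptions based on GraphDB’s documentation:

```python
import requests

# Hedged sketch: list repositories via the Workbench REST API
# (endpoint path and response fields are assumptions from the docs).
resp = requests.get("http://localhost:7200/rest/repositories", timeout=30)
resp.raise_for_status()
for repo in resp.json():
    print(repo.get("id"), "-", repo.get("title"))
```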

GraphQL access via Ontotext Platform 3

GraphDB enables Knowledge Graph access and updates via GraphQL. GraphDB is extended to support the efficient processing of GraphQL queries and mutations to avoid the N+1 translation of nested objects to SPARQL queries.
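As a purely hypothetical sketch of the access pattern (not Ontotext Platform’s actual schema or endpoint), a GraphQL query posted from Python might look like this:

```python
import requests

# Purely illustrative: what a GraphQL query against a knowledge graph
# service might look like. The endpoint URL and the schema (person,
# name, birthDate) are hypothetical, not Ontotext Platform's real API.
query = """
{
  person(where: {name: "Ada Lovelace"}) {
    name
    birthDate
  }
}
"""
resp = requests.post(
    "http://localhost:9993/graphql",  # hypothetical endpoint
    json={"query": query},
    timeout=30,
)
print(resp.json())
```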

Ontotext offers three editions of GraphDB: Free, Standard, and Enterprise.

Free – commercial, file-based, sameAs & query optimizations, scales to tens of billions of RDF statements on a single server with a limit of two concurrent queries.

Standard Edition (SE) – commercial, file-based, sameAs & query optimizations, scales to tens of billions of RDF statements on a single server and an unlimited number of concurrent queries.

Enterprise Edition (EE) – high-availability cluster with worker and master database implementation for resilience and high-performance parallel query answering.

Why is GraphDB the preferred choice of many data architects and data ops teams?

3 Reasons:

1. High Availability Cluster Architecture

GraphDB offers you a high-performance cluster proven to scale in production environments. It supports:

  • coordinating all read and write operations,
  • ensuring that all worker nodes are synchronized,
  • propagating updates (insert and delete tasks) across all workers and checking updates for inconsistencies,
  • load balancing read requests between all available worker nodes.

Further cluster benefits include:

  • improved resilience: failover and dynamic configuration,
  • improved query bandwidth: a larger cluster means more queries per unit time,
  • deployability across multiple data centres,
  • elastic scaling in cloud environments,
  • integration with search engines.

Cluster Management and Monitoring

It supports:

  • automatic cluster reconfiguration in the event of failure of one or more worker nodes,
  • a smart client supporting multiple endpoints.

2. Easy Setup

GraphDB is 100% Java-based in order to be platform independent. It is available through native installation packages or Maven. It also supports Puppet and can be dockerized. GraphDB is cloud agnostic – it can be deployed on AWS, Azure, Google Cloud, etc.

3. Support

Depending on the edition you are using, you can rely on community support (Stack Overflow monitoring).

Ontotext also has a dedicated support team that can assist through customized runbooks, easy Slack communication and a Jira issue-tracking system.

A big thank you to Ontotext for providing some insights into their product and database.

Yours,

DBpedia Association

More than 130 knowledge graph enthusiasts joined the KGiA event.

Opening the KG in Action event

The SEMANTiCS Onsite Conference 2020 had to be postponed till September 2021. To bridge the gap until 2021, we took this opportunity to organize the Knowledge Graphs in Action (KGiA) online track as a SEMANTiCS satellite event on October 6, 2020. This new online conference is a combination of two existing events: the DBpedia Community Meeting and the annual Spatial Linked Data conference organised by EuroSDR and the Platform Linked Data Netherlands. We combined the best of both and as a bonus we added a track about Geo-information Integration organized by EuroSDR. As special joint sessions we presented four keynote speakers. 

First and foremost, we would like to thank the SEMANTiCS, EuroSDR and Platform Linked Data Netherlands for organizing the KGiA online event and many thanks to all chairs who supported the conference.

In the following, we will give you a brief retrospective of the keynote presentations and talks.

Opening & Keynote #1

The Knowledge Graphs in Action conference was opened with the keynote presentation ‘Data Infrastructure for Energy System Models’ by Carsten Hoyer-Klick (German Aerospace Center). He presented LOD GEOSS, a project for the development of a distributed data infrastructure for the analysis of energy systems. The project develops networked database concepts, based on the ideas of linked open data and the Semantic Web, for the input and output data of energy system models in energy systems analysis. Afterwards the conference chairs offered three parallel sessions in the morning.

Morning Sessions 

Session 1: Spatial Linked Data Country Update

In this session seven speakers presented the uptake and latest progress of Spatial Linked Data adoption in European countries, either within national mapping agencies or beyond.

Session 2: VGI country presentations

There is an increasing use of crowdsourced geo-information (CGI) in spatial data applications by National Mapping and Cadastral Agencies (NMCAs). Applications range from using CGI to support the updating of spatial data, to adding extra content such as land use, building entrances, road barriers, sensors placed in public space and many more. This session hosted five presentations from NMCAs showing the status of their CGI integration in mapping applications and processes.

Session 3: DBpedia Member presentations

Members of the DBpedia Association presented their latest tools, applications and technical developments in this session. Filipe Mesquita (Diffbot) opened the member session with his talk ‘Beyond Human Curation: How Diffbot Is Building A Knowledge Graph of the Web’. Also ImageSnippets, timbr.ai and GNOSS gave interesting and delightful talks about their technical developments. Vassil Momtchev from Ontotext closed the session by giving insights into GraphDB 9.4.

For further details of the presentations follow the links to the slides on the event page.

Afternoon Sessions 

Keynote #2

The afternoon sessions started with an interesting keynote by Peter Mooney (Maynooth University). He talked about the opportunities for a more integrated approach to Geo-information integration. 

Dutch National Graph as a Digital Twin

After the second keynote, Sebastian Hellmann, the CEO of the DBpedia Association, presented the development and methodology of the National Knowledge Graph for the Netherlands. In cooperation with Dutch partners, DBpedia invested two months to develop this new knowledge graph. His insightful presentation was followed by Benedicte Bucher (University Gustave Eiffel) talking about knowledge graphs on spatial digital assets in Europe. She also presented the EuroSDR LDG initiative in detail.

Afternoon Parallel Sessions

Session 4: Transforming Linked Data into a networked data economy – DBpedia Chapter Session

In the DBpedia Chapter Session, members of different European DBpedia chapters gave an overview of the data landscape in their countries. They presented identified business opportunities and important challenges, such as the automated clearance of licenses in their countries. Enno Meijers (National Library of the Netherlands) summarized the data landscape in the Netherlands. There were also presentations about the data landscape in Brazil, Spain, Austria and Poland.

Session 5: EuroSDR VGI data wrangling

This session intended to uncover new combinations and integrations of CGI data with data from NMCAs that demonstrate the added value for map creation and map usage. Data wrangling (the process of creating small reproducible data processing workflows) was deployed for this work by using and combining existing geospatial software (desktop, web and mobile). In this session the results of the data wrangling process were presented.

Session 6: Spatial Session

In this session, two speakers presented how they built knowledge graphs, and in the second part three presenters gave insights into tooling and presented the state of the art on working with Linked Data.

For further details of the presentations follow the links to the slides on the event page.

Keynote #3 and #4

Keynote #3 ‘Spatial Knowledge in Action – Deep semantics, geospatial thinking, and new cartographies’ was given by Marinos Kavouras (National Technical University of Athens). Marinos stated that the power of maps and modern cartographic language proves to have a new role for society at large, as an indispensable communication and cognitive tool. The KG in Action conference ended with the keynote presentation ‘Know, Know Where, KnowWhereGraph’ by Krzysztof Janowicz (University of California). During his live talk from California, Krzysztof provided an overview of ideas and hopes for creating geo-specific knowledge graphs and geo-enrichment services on top of this graph to address some of the aforementioned challenges.

In case you missed the event, all slides and presentations are also available on the DBpedia website. We will upload all recordings to the DBpedia YouTube channel. Further insights, feedback and photos about the event are available on Twitter (#KGiA hashtag).

We are now looking forward to 2021. We plan to have meetings at the Knowledge Graph Conference and the SEMANTiCS conference in Amsterdam. Stay safe and check Twitter, LinkedIn and our Website or subscribe to our Newsletter for the latest news and information.

Yours,

DBpedia Association

GSoC 2020 recap

With 45 project proposals, this GSoC edition marked a new record for DBpedia.

GSoC and DBpedia sticker

Oh, what a year! For the 9th year in a row, we were part of this incredible journey of young ambitious developers who joined us as an open source organization to work on a GSoC coding project all summer. 

Each year has brought us new project ideas, many amazing students and mostly great project results that shaped the future of DBpedia. 

Even though Covid-19 changed a lot in the world, it couldn’t shake GSoC much. The program, designed to mentor youngsters from afar, is almost too perfect for the current world situation. One of the advantages of Google Summer of Code, especially in times like these, is the chance to work on projects remotely, but still obtain a first deep dive into open source projects like us – DBpedia.

Meet the students and their projects

This year, we had notably more applications than in previous ones. With 45 project proposals, this GSoC edition marked a new record for DBpedia. Throughout the summer program, our seven finalists worked intensely on their challenging DBpedia projects, with great outcomes to show to the public. Projects ranged from extending our DBpedia extraction framework, to a DBpedia Database project, to an online tool to generate RDF from DBpedia abstracts. If you want deeper insights into our GSoC students’ work, you can find their blogs and repos in the following list. Check them out!

Thanks to all our mentors around the world for joining us in this endeavour, for mentoring with kindness and technical expertise. A huge shout out to those who have been by our side for so many years in a row. Many thanks to Tommaso Soru, Beyza Yaman, Diego Moussalem, Edgard Marx, Mariano Rico, Thiago Castro Ferreira, Luca Virgili as well as Sebastian Hellmann, Stuart Chan, Amandeep Srivastava, Julio Hernandez and Jan Forberg. 

Mentor Summit

During the previous years you might have noticed that we always organized a little lottery to decide which mentor or organization admin could join the annual GSoC mentor summit. As this year’s event will be held online, space is not limited to 300-something mentors but is open to all organization admins and mentors alike. The GSoC Virtual Mentor Summit takes place October 15–16, 2020 and this year we hope all our mentors will find the time to join and exchange with fellow mentors from dozens of open source projects.

After GSoC is before the next GSoC

We cannot wait for the 2021 edition. Likewise, if you are an ambitious student who is interested in open source development and working with DBpedia, you are more than welcome to either contribute your own project idea or apply for project ideas we offer starting in early 2021.

In case you would like to mentor a project, do not hesitate to get in touch with us via dbpedia@infai.org.

Stay tuned, frequently check Twitter, LinkedIn or the DBpedia Forum to stay in touch and don’t miss your chance of becoming a crucial force in this endeavour as well as a vital member of the DBpedia community.

See you soon,

Yours,

DBpedia Association