The Diffbot Knowledge Graph and Extraction Tools

DBpedia Member Features – In the last few weeks, we gave DBpedia members the chance to present special products, tools and applications and share them with the community. We already published several posts in which DBpedia members provided unique insights. This week we will continue with Diffbot. They will present the Diffbot Knowledge Graph and various extraction tools. Have fun while reading!

by Diffbot

Diffbot’s mission to “structure the world’s knowledge” began with Automatic Extraction APIs meant to pull structured data from most pages on the public web by leveraging machine learning rather than hand-crafted rules.

More recently, Diffbot has emerged as one of only three Western entities to crawl a vast majority of the web, utilizing our Automatic Extraction APIs to make the world’s largest commercially-available Knowledge Graph.

A Knowledge Graph At The Scale Of The Web

The Diffbot Knowledge Graph is automatically constructed by crawling and extracting data from over 60 billion web pages. It currently represents over 10 billion entities and 1 trillion facts covering People, Organizations, Products, Articles, and Events, among other entity types.

Users can access the Knowledge Graph programmatically through an API. Other ways to access the Knowledge Graph include a visual query interface and a range of integrations (e.g., Excel, Google Sheets, Tableau). 

Visually querying the web like a database


Whether you’re consuming Diffbot KG data in a visual “low code” way or programmatically, we’ve continually added features to our powerful query language (Diffbot Query Language, or DQL) to allow users to “query the web like a database.” 
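To make the “query the web like a database” idea concrete, here is a minimal sketch of a DQL query sent to the Knowledge Graph API from Python. Treat it as an illustration rather than official sample code: the endpoint path, parameter names and response shape reflect Diffbot’s public API documentation as we understand it, and DIFFBOT_TOKEN is a placeholder.

```python
# Hedged sketch: querying the Diffbot Knowledge Graph with DQL from Python.
# The endpoint and parameter names follow Diffbot's public documentation as
# we understand it; DIFFBOT_TOKEN is a placeholder for a real API token.
import requests

DIFFBOT_TOKEN = "YOUR_TOKEN"  # placeholder

# Illustrative DQL: organizations headquartered in Leipzig.
dql = 'type:Organization locations.city.name:"Leipzig"'

resp = requests.get(
    "https://kg.diffbot.com/kg/v3/dql",  # assumed KG search endpoint
    params={"type": "query", "token": DIFFBOT_TOKEN, "query": dql, "size": 5},
    timeout=30,
)
resp.raise_for_status()

# The response is assumed to wrap each hit in an "entity" record.
for hit in resp.json().get("data", []):
    print(hit.get("entity", {}).get("name"))
```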

Guilt-Free Public Web Data

Current use cases for Diffbot’s Knowledge Graph and web data extraction products run the gamut and include data enrichment; lead enrichment; market intelligence; global news monitoring; large-scale product data extraction for ecommerce and supply chain; sentiment analysis of articles, discussions, and products; and data for machine learning. For all of the billions of facts in Diffbot’s KG, data provenance is preserved with the original source (a public URL) of each fact.

Entities, Relationships, and Sentiment From Private Text Corpora 

The team of researchers at Diffbot has been developing new natural language processing techniques for years to improve their extraction and KG products. In October 2020, Diffbot made this technology commercially available to all via the Natural Language API.

Our Natural Language API Demo Parsing Text Input About Diffbot Founder, Mike Tung

Our Natural Language API pulls out entities, relationships/facts, categories and sentiment from free-form texts. This allows organizations to turn unstructured texts into structured knowledge graphs. 
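As a rough illustration, a call to the Natural Language API could look like the sketch below. The endpoint, parameters and response fields are assumptions based on Diffbot’s public documentation and may differ in detail; the token is a placeholder.

```python
# Hedged sketch: extracting entities, facts and sentiment from free text
# with Diffbot's Natural Language API. Endpoint, parameters and response
# fields are assumptions based on Diffbot's public documentation.
import requests

DIFFBOT_TOKEN = "YOUR_TOKEN"  # placeholder

text = "Diffbot was founded by Mike Tung and is headquartered in California."

resp = requests.post(
    "https://nl.diffbot.com/v1/",  # assumed NL API endpoint
    params={"fields": "entities,facts,sentiment", "token": DIFFBOT_TOKEN},
    json={"content": text, "lang": "en"},
    timeout=30,
)
resp.raise_for_status()
doc = resp.json()

print("document sentiment:", doc.get("sentiment"))
for entity in doc.get("entities", []):
    print(entity.get("name"), entity.get("salience"))
```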

Diffbot and DBpedia

In addition to extracting data from web pages, Diffbot’s Knowledge Graph compiles public web data from many structured sources. One important source of knowledge is DBpedia. Diffbot also contributes to DBpedia by providing access to our extraction and KG services and collaborating with researchers in the DBpedia community. For a recent collaboration between DBpedia and Diffbot, be sure to check out the Diffbot track in DBpedia’s Autumn Hackathon for 2020.

A big thank you to Diffbot, especially Filipe Mesquita, for presenting their innovative Knowledge Graph.

Yours,

DBpedia Association

A year with DBpedia – Retrospective Part 2/2020

This is the final part of our journey through 2020. In the previous blog post we already presented DBpedia highlights, events and tutorials. Now we want to take a deeper look at the second half of 2020 and give an outlook for 2021.

DBpedia Autumn Hackathon and the KGiA Conference

From September 21st to October 1st, 2020 we organized the first Autumn Hackathon. We invited all community members to join and contribute to this new format. You had the chance to experience the latest technology provided by the DBpedia Association members. We hosted special member tracks, a Dutch National Knowledge Graph Track and a track to improve DBpedia. Results were presented at the final hackathon event on October 5, 2020. We uploaded all contributions to our YouTube channel. Many thanks for all your contributions and invested time!

The Knowledge Graphs in Action event

Opening the KGiA event on October 6, 2020

The SEMANTiCS Onsite Conference 2020 had to be postponed until September 2021. To bridge the gap, we took the opportunity to organize the Knowledge Graphs in Action online track as a SEMANTiCS satellite event on October 6, 2020. This new online conference is a combination of two existing events: the DBpedia Community Meeting, which is regularly held as part of SEMANTiCS, and the annual Spatial Linked Data conference organised by EuroSDR and the Platform Linked Data Netherlands. We glued them together and, as a bonus, added a track about Geo-information Integration organized by EuroSDR. As special joint sessions, we presented four keynote speakers. More than 130 knowledge graph enthusiasts joined the KGiA event, and it was a great success for the organizing team. Did you miss the event? No problem! We uploaded all recorded sessions to the DBpedia YouTube channel.

KnowConn Conference 2020

Our CEO, Sebastian Hellmann, gave the talk ‘DBpedia Databus – A platform to evolve knowledge and AI from versioned web files’ on December 2, 2020 at the KnowledgeConnexions Online Conference. It was a great success and we received a lot of positive and constructive feedback for the DBpedia Databus. If you missed his talk and are looking for Sebastian’s slides, please check here: http://tinyurl.com/connexions-202

DBpedia Archivo – Call to improve the web of ontologies

Search bar to inspect an archived ontology - DBpedia Archivo
DBpedia Archivo

On December 7, 2020 we introduced DBpedia Archivo – an augmented ontology archive and interface to implement FAIRer ontologies. Each ontology is rated with up to 4 stars measuring basic FAIR features. We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. Further information on how to help us is available in a detailed post on our blog.

Member features on the blog

At the beginning of November 2020 we started the member features on the blog. We gave DBpedia members the chance to present special products, tools and applications. We published several posts in which DBpedia members, like Ontotext, GNOSS, the Semantic Web Company, TerminusDB or FinScience, shared unique insights with the community. At the beginning of 2021 we will continue with interesting posts and presentations. Stay tuned!

We do hope we will meet you and some new faces during our events next year. The DBpedia Association wants to get to know you because DBpedia is a community effort and would not continue to develop, improve and grow without you. We plan to have meetings in 2021 at the Knowledge Graph Conference, the LDK conference in Zaragoza, Spain and the SEMANTiCS conference in Amsterdam, Netherlands.

Happy New Year to all of you! Stay safe and check Twitter, LinkedIn and our Website or subscribe to our Newsletter for the latest news and information.

Yours,

DBpedia Association

2020 – Oh What a Challenging Year

Can you believe it? Thirteen years ago the first DBpedia dataset was released. Thirteen years of development, improvements and growth. Now more than 2,600 GB of data have been uploaded to the DBpedia Databus. We want to take this as an opportunity to send out a big Thank you! to all contributors, developers, coders, hosters, funders, believers and DBpedia enthusiasts who made that possible. Thank you for your support!

In the upcoming blog series, we would like to take you on a retrospective tour through 2020, giving you insights into a year with DBpedia. We will highlight our past events and the development around the DBpedia dataset.

A year with DBpedia and the DBpedia dataset – Retrospective Part 1

DBpedia Workshop colocated with LDAC2020

On June 19, 2020 we organized a DBpedia workshop co-located with the LDAC workshop series to exchange knowledge regarding new technologies and innovations in the fields of Linked Data and the Semantic Web. Dimitris Kontokostas (Diffbot, US) opened the meeting with his delightful keynote presentation ‘{RDF} Data quality assessment – connecting the pieces’. His presentation focused on defining data quality and identifying data quality issues. Following Dimitris’s keynote, many community-based presentations were held, enabling an exciting workshop day.

Most Influential Scholars

DBpedia has become a high-impact, high-visibility project because of our foundation in excellent Knowledge Engineering as the pivot point between scientific methods, innovation and industrial-grade output. Among the drivers behind DBpedia are six of the top 10 Most Influential Scholars in Knowledge Engineering, alongside the C-level executives of our members. Check all details here: https://www.aminer.cn/ai2000/country/Germany

DBpedia (dataset) and Google Summer of Code 2020

For the 9th year in a row, we were part of this incredible journey of young, ambitious developers who joined us as an open-source organization to work on a GSoC coding project all summer. With 45 project proposals, this GSoC edition marked a new record for DBpedia. Even though Covid-19 changed a lot in the world, it couldn’t shake GSoC. If you want deeper insights into our GSoC students’ work, you can find their blogs and repos here: https://blog.dbpedia.org/2020/10/12/gsoc2020-recap/

DBpedia Tutorial Series 2020

Stack slide from the tutorial

During this year we organized three amazing tutorials in which more than 120 DBpedians took part. Over the last year, the DBpedia core team has consolidated a great amount of technology around DBpedia. These tutorials are targeted at developers (in particular of DBpedia Chapters) who wish to learn how to replicate local infrastructure, such as loading data and hosting their own SPARQL endpoint. A core focus was the new DBpedia Stack, which contains several dockerized applications that automatically load data from the DBpedia Databus. We will continue organizing more tutorials in 2021. Looking forward to meeting you online! In case you missed the DBpedia Tutorial Series 2020, watch all videos here.

In our upcoming blog post after the holidays we will give you more insights into past events and technical achievements. We are now looking forward to the year 2021. The DBpedia team plans to have meetings at the Knowledge Graph Conference, the LDK conference in Zaragoza, Spain and the SEMANTiCS conference in Amsterdam, Netherlands. We wish you a merry Christmas and a happy New Year. In the meantime, stay tuned and visit our Twitter channel or subscribe to our DBpedia Newsletter.

Yours DBpedia Association

FinScience: leveraging DBpedia tools for fintech applications

DBpedia Member Features – In the last few weeks, we gave DBpedia members the chance to present special products, tools and applications and share them with the community. We already published several posts in which DBpedia members provided unique insights. This week we will continue with FinScience. They will present their latest products, solutions and challenges. Have fun while reading!

by FinScience

A brief presentation of who we are

FinScience is an Italian data-driven fintech company founded in 2017 in Milan by former Google senior managers and Alternative Data experts, who have combined their digital and financial expertise. FinScience thus originates from a merger of the world of Finance and the world of Data Science.
The company leverages the founders’ experience in Data Governance, Data Modeling and Data Platform solutions. These are further enriched through the company’s technical role in the European consortium SSIX (Horizon 2020 program), which focused on building a social sentiment index for financial purposes. FinScience applies proprietary AI-based technologies to combine financial data/insights with alternative data in order to generate new investment ideas, ESG scores and non-conventional lists of companies that can be included in investment products by financial operators.

FinScience’s data analysis pipeline is strongly grounded in the DBpedia ontology: the greatest value, in our experience, lies in the possibility to connect knowledge in different languages, to query automatically extracted structured information and to have rather frequently updated models.

Products and solutions

FinScience retrieves content from the web daily. About 1.5 million web pages are visited every day across about 35,000 different domains. The content of these pages is extracted, interpreted and analysed via Natural Language Processing techniques to identify valuable information and sources. Thanks to the structured information based on the DBpedia ontology, we can apply our proprietary AI algorithms to suggest the right investment opportunities to our customers. Our products are mainly based on the integration of this purely digital data – we call it “alternative data” – with traditional sources coming from the world of finance and sustainability. We describe these products briefly:

  • FinScience Platform for traders: it leverages the power of machine learning to help traders monitor specific companies, spot new trends in the financial market and gain access to a high added-value selection of companies and themes.
  • ESG scoring: we provide an assessment of corporate ESG performance by combining internal data (traditional, self-disclosed data) with external ‘alternative’ data (stakeholder-generated data) in order to measure the gap between what companies communicate and how stakeholders perceive their sustainability commitments.
  • Thematic selections of listed companies: we create trend-driven selections oriented towards innovative themes: our data, together with the analysis of financial specialists, contribute to the selection of a set of listed companies related to trending themes such as the Green New Deal, 5G technology or new medtech applications.

FinScience and DBpedia

As mentioned before, FinScience is strongly grounded in the DBpedia ontology, since we employ Spotlight to perform Named Entity Recognition (NER), namely the automatic annotation of entities in a text. The NER task is performed with a two-step procedure. The first step consists of annotating the named entities of a text using DBpedia Spotlight. In particular, Spotlight links a mention in the text (identified by its name and its context within the text) to the DBpedia entity that maximizes the joint probability of occurrence of both. The model is pre-trained on texts extracted from Wikipedia. Note that each entity is represented by a link to a DBpedia page (see, e.g., http://dbpedia.org/page/Eni), a DBpedia type indicating the type of the entity according to the DBpedia ontology, and other information.
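For illustration, this first step can be reproduced against the public DBpedia Spotlight endpoint with a few lines of Python; the confidence threshold below is an illustrative choice, not the value used in FinScience’s production pipeline.

```python
# Annotating text with the public DBpedia Spotlight REST endpoint.
# The endpoint and parameters are documented by the Spotlight project;
# the confidence threshold here is an illustrative choice.
import requests

text = "Eni is an Italian multinational energy company headquartered in Rome."

resp = requests.get(
    "https://api.dbpedia-spotlight.org/en/annotate",
    params={"text": text, "confidence": 0.5},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for res in resp.json().get("Resources", []):
    # @URI links the mention to a DBpedia entity; @types carries ontology types
    print(res["@surfaceForm"], "->", res["@URI"], res.get("@types"))
```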

Another interesting feature of this approach is that we have a one-to-one mapping between the Italian and English entities (and in general any language supported by DBpedia), allowing us to have a unified representation of an entity in the two languages. We are able to obtain this kind of information by exploiting the potential of DBpedia’s Virtuoso endpoint, which allows us to access the DBpedia dataset via SPARQL. By identifying the entities mentioned in online content, we can understand which topics are mentioned and thus identify companies and trends that are spreading in the digital ecosystem, as well as analyze how they are related to each other.
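This cross-language mapping can be illustrated with a small SPARQL query against the public DBpedia endpoint; the sketch below asks Virtuoso for the Italian counterpart of the English entity for Eni via owl:sameAs links.

```python
# Resolving the Italian counterpart of an English DBpedia entity through
# owl:sameAs links, via the public Virtuoso-hosted SPARQL endpoint.
import requests

query = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?same WHERE {
  <http://dbpedia.org/resource/Eni> owl:sameAs ?same .
  FILTER(STRSTARTS(STR(?same), "http://it.dbpedia.org/"))
}
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for binding in resp.json()["results"]["bindings"]:
    print(binding["same"]["value"])  # e.g. http://it.dbpedia.org/resource/Eni
```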

Challenges and next steps

One of the toughest challenges for us is to find an optimal way to update the models used by DBpedia Spotlight. Every day new entities and concepts arise, and we want to recognise them in the news we analyze. And that is not all. In addition to recognizing new concepts, we need to be able to track an entity through all the updated versions of the model. In this way, we will not only be able to identify entities, but we will also have evidence of when some concepts were first generated. And we will know how they have changed over time, regardless of the names that have been used to identify them.

We are strongly involved in the DBpedia community and we try to contribute with our know-how. In particular, FinScience will contribute to infrastructure and Dockerfiles, as well as to finding issues in newly released projects (for instance, wikistats-extractor).

A big thank you to FinScience for presenting their products, challenges and contribution to DBpedia.

Yours,

DBpedia Association

DBpedia Archivo – Call to improve the web of ontologies

Dear all, 

We are proud to announce DBpedia Archivo – an augmented ontology archive and interface to implement FAIRer ontologies. Each ontology is rated with up to 4 stars measuring basic FAIR features. We discovered 890 ontologies, reaching on average 1.95 out of 4 stars. Many of them have no or unclear licenses and have issues w.r.t. retrieval and parsing.

DBpedia Archivo: Community action on individual ontologies

We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version – Archivo will automatically pick it up within 8 hours. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo.

The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.

Community action on all ontologies (quality, FAIRness, conformity)

Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies.

  1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia’s CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on GitHub; a minimal example of such a test is sketched after this list. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well.
  2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase the FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aids, mapping management tools and quality checks.
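To make point 1 concrete, here is a minimal sketch of a SHACL test run locally with the rdflib and pyshacl Python libraries. The shape is an illustrative example (every owl:Ontology should declare a license), not one of Archivo’s actual CI tests, and my-ontology.ttl is a placeholder path.

```python
# Minimal local SHACL check with rdflib + pyshacl. The shape is an
# illustrative example (every owl:Ontology should declare a license),
# not one of Archivo's actual CI tests; my-ontology.ttl is a placeholder.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix ex:  <http://example.org/shapes#> .

ex:LicenseShape a sh:NodeShape ;
    sh:targetClass owl:Ontology ;
    sh:property [ sh:path dct:license ; sh:minCount 1 ] .
"""

data_graph = Graph().parse("my-ontology.ttl", format="turtle")   # placeholder
shapes_graph = Graph().parse(data=shapes_ttl, format="turtle")

conforms, _, report = validate(data_graph, shacl_graph=shapes_graph)
print("conforms:", conforms)
print(report)
```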

How does DBpedia Archivo work?

Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates and archives the latest snapshot persistently on the DBpedia Databus.

Archivo’s mission

Archivo’s mission is to improve the FAIRness (findability, accessibility, interoperability and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated and machine-readable, and it enforces interoperability with its star rating.

– Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.

– Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentralized web of ontologies.
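For consumers who want to rely on those persisted snapshots, the latest archived version of an ontology can be fetched programmatically. The sketch below assumes Archivo’s download endpoint and its parameters (o for the ontology URI, f for the serialization) as described on the Archivo site; details may differ.

```python
# Hedged sketch: fetching the latest persisted snapshot of an ontology from
# Archivo. The /download endpoint and its parameters (o = ontology URI,
# f = serialization format) are assumptions based on the Archivo site.
import requests

ontology = "http://xmlns.com/foaf/0.1/"  # any archived ontology URI

resp = requests.get(
    "https://archivo.dbpedia.org/download",
    params={"o": ontology, "f": "ttl"},  # assumed parameters
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:500])  # first lines of the archived Turtle snapshot
```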

Please find the current paper about DBpedia Archivo here: https://svn.aksw.org/papers/2020/semantics_archivo/public.pdf 

Let’s all join together to make the web of ontologies more reliable and stable.

Yours,

Johannes Frey, Denis Streitmatter, Fabian Götz, Sebastian Hellmann and Natanael Arndt

PoolParty Semantic Suite: The Ideal Tool To Build And Manage Enterprise Knowledge Graphs

DBpedia Member Features – In the coming weeks, we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. This week the Semantic Web Company will present use cases for the PoolParty Semantic Suite. Have fun while reading!

by the Semantic Web Company

About 80 to 90 percent of the information companies generate is extremely diverse and unstructured, stored in text files, e-mails or similar documents, which makes it difficult to search and analyze. Knowledge graphs have become a well-known solution to this problem because they make it possible to extract information from text and link it to other data sources, whether structured or not. However, building a knowledge graph at enterprise scale can be challenging and time-consuming.

PoolParty Semantic Suite is the most complete and secure semantic platform on the global market. It is also the ideal tool to help companies build and manage Enterprise Knowledge Graphs. With PoolParty in place, you will have no problem extracting value from large amounts of heterogeneous data, whether it is stored in a relational database or in text files. The platform provides comprehensive tools for the management of enterprise knowledge graphs along the entire life cycle. Here is a list of the main use cases for the PoolParty Semantic Suite:

Data linking and enrichment

Driven by the Linked Data initiative, increasing amounts of viable data sets about various topics have been published on the Semantic Web. PoolParty allows users to use these online resources, amongst them DBpedia, to easily and quickly enrich a thesaurus with more data.

Search and recommender engines

Arrive at enriched, in-depth search results that provide relevant facts and contextualized answers to your specific questions, rather than broad search results with many irrelevant documents and messages but no valuable input. PoolParty Semantic Suite can be used to implement semantic search and recommendations that are relevant to your users.

Text Mining and Auto Tagging

Manually tagging an entire database is very time-consuming and often leads to inconsistent search results. PoolParty’s graph-based text mining can improve this process, making it faster, more consistent and more precise. This is achieved by using advanced text mining algorithms and Natural Language Processing to automatically extract relevant entities, terms and other metadata from text and documents, helping drive in-depth text analytics.

Data Integration and Data Fabric

The Semantic Data Fabric is a new solution to data silos that combines best-of-breed technologies – data catalogs and knowledge graphs – based on Semantic AI. With a semantic data fabric, companies can combine text and documents (unstructured) with data residing in relational databases and data warehouses (structured) to create a comprehensive view of their customers, employees, products, and other vital areas of business.

Taxonomies, Ontologies and Knowledge Graphs That Scale

With release 8.0 of the PoolParty Semantic Suite, users have even more options to conveniently generate, edit, and use knowledge graphs. In addition, the powerful and performant GraphDB by Ontotext has been added as PoolParty’s recommended embedded store and it is shipped as an add-on module. GraphDB is an enterprise-level graph database with state-of-the-art performance, scalability and security. This provides greater robustness to PoolParty and allows you to work with much larger taxonomies effectively.

A big thank you to the Semantic Web Company for presenting use cases for the PoolParty Semantic Suite.

Yours,

DBpedia Association


TerminusDB and DBpedia

DBpedia Member Features – In the coming weeks, we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. This week TerminusDB will show you how to use TerminusDB’s unique collaborative features to access DBpedia data. Have fun while reading!

by Luke Feeney from TerminusDB

This post introduces TerminusDB as a member of the DBpedia Association – proudly supporting the important work of DBpedia. It will also show you how to use TerminusDB’s unique collaborative features to access DBpedia data.

TerminusDB – an Open Source Knowledge Graph

TerminusDB is an open-source knowledge graph database that provides reliable, private & efficient revision control & collaboration. If you want to collaborate with colleagues or build data-intensive applications, nothing will make you more productive.

TerminusDB provides the full suite of revision control features and TerminusHub allows users to manage access to databases and collaboratively work on shared resources.

  • Flexible data storage, sharing, and versioning capabilities
  • Collaboration for your team or integrated in your app
  • Work locally then sync when you push your changes
  • Easy querying, cleaning, and visualization
  • Integrate powerful version control and collaboration for your enterprise and individual customers.

The TerminusDB project originated in Trinity College Dublin in Ireland in 2015. From its earliest origins, TerminusDB worked with DBpedia through the ALIGNED project, which was a research project funded by Horizon 2020 that focused on building quality-centric software for data management.

ALIGNED Project with early TerminusDB (then called ‘Dacura’) and DBpedia


While working on this project, and especially our work building the architecture behind Seshat: The Global History Databank, we needed a solution that could enable collaboration among a highly distributed team on a shared database whose primary function was the curation of high-quality datasets with a very rich structure. While the scale of data was not particularly large, the complexity was extremely high. Unfortunately, the linked-data and RDF toolchains were severely lacking – we evaluated several tools in an attempt to architect a solution; however, in the end we were forced to build an end-to-end solution ourselves.

Evolution of TerminusDB

In general, we think that computers are fantastic things because they allow you to leverage much more evidence when making decisions than would otherwise be possible. It is possible to write computer programs that automate the ingestion and analysis of unimaginably large quantities of data.

If the data is well chosen, it is almost always the case that computational analysis reveals new and surprising insights simply because it incorporates more evidence than could possibly be captured by a human brain. And because the universe is chaotic and there are combinatorial explosions of possibilities all over the place, evidence is always better than intuition when seeking insight.

As anybody who has grappled with computers and large quantities of data will know, it’s not as simple as that. Computers should be able to do most of this for us. It makes no sense that we are still writing the same simple and tedious data validation and transformation programs over and over ad infinitum. There must be a better way.

This is the problem that we set out to solve with TerminusDB. We identified two indispensable characteristics that were lacking in data management tools:

  1. A rich and universally machine-interpretable modelling language. If we want computers to be able to transform data between different representations automatically, they need to be able to describe their data models to one another.
  2. Effective revision control. Revision control technologies have been instrumental in turning software production from a craft to an engineering discipline because they make collaboration and coordination between large groups much more fault tolerant. The need for such capabilities is obvious when dealing with data – where the existence of multiple versions of the same underlying dataset is almost ubiquitous and with only the most primitive tool support.

TerminusDB and DBpedia

Team TerminusDB took part in the DBpedia Autumn Hackathon 2020. As you know, DBpedia is an extract of the structured data from Wikipedia.

Our Hackathon Project Board

You can read all about our DBpedia Autumn Hackathon adventures in this blog post.

Open Source

Unlike many systems in the graph database world, TerminusDB is committed to open source. We believe in the principles of open source, open data and open science. We welcome all those data people who want to contribute to the general good of the world. This is very much in alignment with the DBpedia Association and community.

DBpedia on TerminusHub

TerminusHub is the collaboration point between TerminusDBs. You can push data to your colleagues and collaborators, you can pull updates (efficiently – just the diffs) and you can clone databases that are made available on the Hub (by the TerminusDB team or by others). Think of it as GitHub, but for data.

The DBpedia database is available on TerminusHub. You can clone the full DB in a couple of minutes (depending on your internet connection, of course) and get querying. TerminusDB uses succinct data structures to compress everything, which makes sharing large databases feasible – there is more technical detail here: https://github.com/terminusdb/terminusdb/blob/dev/docs/whitepaper/terminusdb.pdf for interested parties.

TerminusDB in the DBpedia Association

We will contribute to DBpedia by working to improve the quality of data available, by introducing new datasets that can be integrated with DBpedia, and by participating fully in the community.

We are looking forward to a bright future together.

A big thank you to Luke and TerminusDB for presenting how TerminusDB works and how they would like to work with DBpedia in the future.

Yours,

DBpedia Association

GNOSS – How do we envision our future work with DBpedia

DBpedia Member Features – In the coming weeks, we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. This week GNOSS will give an overview of their products and business focus. Have fun while reading!

 by Irene Martínez and Susana López from GNOSS

GNOSS (https://www.gnoss.com/) is a Spanish technology manufacturing company that has developed its own platform for the construction and exploitation of knowledge graphs. GNOSS technology operates within the set of technologies that converge in semantically interpreted Artificial Intelligence: NLU (Natural Language Understanding); identification, extraction, disambiguation and linking of entities; and the construction of interrogation and knowledge-discovery systems based on inferences and on systems that emulate natural forms of reasoning.

What is our business focus

The GNOSS project is positioned in the emerging market for Deep AI (Deep Understanding AI). By Deep AI we mean the convergence of symbolic AI and sub-symbolic AI.

GNOSS is the leading company in Spain in the construction of knowledge ecosystems that are interpretable and queryable (interrogable) by both machines and people. These ecosystems integrate heterogeneous and distributed data, represented through technical vocabularies and ontologies written in machine-interpretable languages (OWL, RDF), and are consolidated and exploited through knowledge graphs.

The technology developed by GNOSS facilitates the construction, within the framework of the aforementioned ecosystems, of intelligent interrogation and search systems, information enrichment and context generation systems, advanced recommendation systems, predictive Business Intelligence systems based on dynamic visualizations, and NLP/NLU systems.

GNOSS works in the cloud and is offered as a service. We have a complex and robust technological infrastructure designed to compute intelligent data in a framework that offers the maximum guarantee of security and best practices in technology services.

Products and Solutions

PRODUCTS

GNOSS Knowledge Graph Builder is a development platform upon which third parties can deploy their web projects, with a complete suite of components to build knowledge graphs and deploy an intelligent, semantically aware web in record time. The platform enables the interrogation of a knowledge graph by both machines and people. The main modules of the platform are 1) Metadata and Knowledge Graph Construction and Management; 2) Discovery, reasoning and analysis through Knowledge Graphs; 3) Semantic Content Management. It also includes configurable characteristics and functions for fast, agile and flexible adaptation and evolution of intelligent digital ecosystems.

SOLUTIONS

Thanks to GNOSS Knowledge Graph Builder and GNOSS Sherlock Services, we have developed a suite of transversal solutions and some sectorial solutions based on the creation and exploitation of Knowledge Graphs.

The transversal solutions are: GNOSS Metadata Management Solution (for the integration of heterogeneous and distributed information into a semantic data layer, consolidating information into a knowledge graph); GNOSS Sherlock NLP-NLU Service (intelligent software services that let machines understand us, based on natural language processing, entity recognition and linking, and dynamic graphic visualizations); GNOSS Search Cloud (which includes intelligent information search, interrogation and retrieval systems; inferences; recommendations and the generation of significant contexts); and GNOSS Semantic BI & Analytics (expressive and dynamic Business Intelligence based on Smart Data).

We have developed sectorial solutions in Education and University, Tourism, Culture and Museums, Healthcare, Communication and Marketing, Banking, and Public Administration, as well as catalogs and supply chain support.

What significance does DBpedia have for us

We think that the foundations of the great European Project of Symbolic AI are being created thanks to DBpedia and other Linked Open Data projects, by turning the internet into a Universal Knowledge Base that works according to the principles and standards of Linked Open Data and the Semantic Web. This knowledge base, as the brain of the internet, would be the basis of the AI of the future. In this context, we consider that DBpedia plays a central role as an open general knowledge base and, therefore, as the core of the European Project of Symbolic AI.

Currently, some projects developed with the GNOSS platform are already using DBpedia to access a large amount of structured and ontologically represented information, in order to link entities, enrich information and offer contextual information. Two examples of this are the ‘Augmented Reading’ of the Museo del Prado in the descriptions of the Museum’s artworks, and the graph of related entities in Didactalia.net.

The ‘Augmented Reading’ of the Museo del Prado in the descriptions of the Museum’s artworks (see for instance ‘The Family of Carlos IV’ by Francisco de Goya) recognizes and extracts the entities contained in them, thereby providing additional and contextual information, so that anyone can read the descriptions without giving up on understanding them in depth.

In Didactalia.net, for a given educational resource, its Graph of related entities works as a conceptual map of the resource to support the teacher and the student in the teaching-learning process (see for instance this resource about Descartes).

How do we envision our future work with DBpedia

GNOSS can contribute to DBpedia at different levels, from making suggestions for further development to participating in strategy and commercialization.

We could collaborate with DBpedia by contributing to tests of DBpedia releases and giving feedback on the use of DBpedia in projects for public and private organizations developed with GNOSS. Based on this, we could make suggestions for future work considering our experience and customer needs in this context.

We could participate in strategy and commercialization, in order to gain more presence in the sectors in which we work, such as healthcare, education, culture or communication, and to help private companies appreciate and benefit from the great value that DBpedia can offer them.

A big thank you to GNOSS for presenting their product and envisioning how they would like to work with DBpedia in the future.

Yours,

DBpedia Association

Ontotext GraphDB on DBpedia

DBpedia Member Features – In the coming weeks we will give DBpedia members the chance to present special products, tools and applications and share them with the community. We will publish several posts in which DBpedia members provide unique insights. Ontotext will start with the GraphDB database. Have fun while reading!

 by Milen Yankulov from Ontotext

GraphDB is a family of highly efficient, robust and scalable RDF databases. It streamlines the loading and use of linked data cloud datasets, as well as your own resources. For easy use and compatibility with industry standards, GraphDB implements the RDF4J framework interfaces and the W3C SPARQL Protocol specification, and supports all RDF serialization formats. The database offers an open-source API and is the preferred choice of both small independent developers and big enterprise organizations because of its community and commercial support, as well as excellent enterprise features such as cluster support and integration with external high-performance search applications – Lucene, Solr and Elasticsearch. GraphDB is built 100% in Java in order to be OS-platform independent.

GraphDB is one of the few triplestores that can perform semantic inferencing at scale, allowing users to derive new semantic facts from existing facts. It handles massive loads, queries, and inferencing in real-time.
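Because GraphDB speaks the standard SPARQL/RDF4J HTTP protocol mentioned above, any HTTP client can query it. Below is a small sketch in Python; the host, port (7200 is GraphDB’s default) and the repository name “myrepo” are placeholders for a local installation.

```python
# Querying a GraphDB repository over the standard SPARQL/RDF4J HTTP
# protocol. Host, port (7200 is GraphDB's default) and the repository
# name "myrepo" are placeholders for a local installation.
import requests

endpoint = "http://localhost:7200/repositories/myrepo"  # placeholder
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"

resp = requests.get(
    endpoint,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for b in resp.json()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```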

GraphDB Architecture

GraphDB Workbench

Workbench is the GraphDB web-based administration tool. The user interface is similar to the RDF4J Workbench Web Application, but with more functionality.

GraphDB Engine

The GraphDB Workbench REST API can be used for managing locations and repositories programmatically, as well as for managing a GraphDB cluster. It includes connecting to remote GraphDB instances (locations), activating a location, and different ways of creating a repository.

It also includes connecting workers to masters, connecting masters to each other, as well as monitoring the state of a cluster.

GraphQL access via Ontotext Platform 3

GraphDB enables Knowledge Graph access and updates via GraphQL. GraphDB is extended to support the efficient processing of GraphQL queries and mutations to avoid the N+1 translation of nested objects to SPARQL queries.

Ontotext offers three editions of GraphDB: Free, Standard, and Enterprise.

Free – commercial, file-based, sameAs & query optimizations, scales to tens of billions of RDF statements on a single server with a limit of two concurrent queries.

Standard Edition (SE) – commercial, file-based, sameAs & query optimizations, scales to tens of billions of RDF statements on a single server and an unlimited number of concurrent queries.

Enterprise Edition (EE) – high-availability cluster with worker and master database implementation for resilience and high-performance parallel query answering.

Why is GraphDB the preferred choice of many data architects and data ops teams?

3 Reasons:

1. High Availability Cluster Architecture

GraphDB offers you a high-performance cluster proven to scale in production environments. It supports:

  • coordinating all read and write operations,
  • ensuring that all worker nodes are synchronized,
  • propagating updates (insert and delete tasks) across all workers and checking updates for inconsistencies,
  • load balancing read requests between all available worker nodes.

Further benefits of the cluster architecture include:

  • Improved resilience: failover and dynamic configuration
  • Improved query bandwidth: a larger cluster means more queries per unit time
  • Deployability across multiple data centres
  • Elastic scaling in cloud environments
  • Integration with search engines

Cluster Management and Monitoring

It supports (1) automatic cluster reconfiguration in the event of failure of one or more worker nodes, and (2) a smart client supporting multiple endpoints.

2. Easy Setup

GraphDB is 100% Java-based in order to be platform independent. It is available through native installation packages or via Maven. It also supports Puppet and can be Dockerized. GraphDB is cloud-agnostic: it can be deployed on AWS, Azure, Google Cloud, etc.

3. Support

Depending on the edition you are using, you can rely on community support (Stack Overflow monitoring).

Ontotext also has a dedicated support team that can assist through customized runbooks, easy Slack communication and a Jira issue-tracking system.

A big thank you to Ontotext for providing some insights into their product and database.

Yours,

DBpedia Association

New DBpedia Usage Report

Our partner OpenLink recently published a new DBpedia usage report on the SPARQL endpoint and associated Linked Data deployment.

Copyright © 2020 OpenLink Software

Introduction

Just recently, DBpedia Association member and hosting specialist OpenLink released the DBpedia Usage Report, a periodic report on the DBpedia SPARQL endpoint and associated Linked Data deployment.

The report not only gives some historical insight into DBpedia’s usage, number of visits and hits per day, but also shows statistics collected between July 2017 and September 2020, spanning more than 3 years of logs from the DBpedia web service operated by our partner OpenLink Software at http://dbpedia.org/sparql/.

Before we highlight a few aspects of DBpedia’s usage, we would like to thank OpenLink for the continuous hosting of the DBpedia endpoint and the creation of this report.

DBpedia Usage Report: Historical Overview

The first table shows the average numbers of visits and hits per day during the time each DBpedia dataset was live on the http://dbpedia.org/sparql endpoint. Similar to the hits, we also see a huge increase in visits coinciding with the DBpedia 2015–10 release on April 1st, 2016.

Historical overview of visits and hits per day over the course of the last 10 years.

This boost was attributed to intensive promotion of DBpedia via community meetings and exchange with various partners in the Linked Data community. In addition, our social media activity in the community increased backlinks. Since then, not only have the numbers of hits risen, but DBpedia has also provided better data quality. We are constantly working on improving the accessibility, data quality and stability of the SPARQL endpoint.

Kudos to OpenLink for maintaining the technical baseline for DBpedia.

The next graph shows the percentage of the total number of hits in a given time period that can be attributed to the /sparql endpoint. If we look at the historical data from 2014–09 onward, we can see that requests to /sparql were about 60.16% of the total number of hits.

DBpedia Usage Report: Current Statistics

If we focus on the last 12 months, we see a slightly lower average of 48.10%, as shown in the graph below. This means that around 50% of the traffic uses Linked Data constructs to view the information available through DBpedia. To put this into perspective: of the average of 7.2 million hits to the endpoint on a given day, roughly 3.6 million are Linked Data deployment hits.

The following table shows the information on visits, sites and hits for each month between September 2019 and September 2020.

Statistical overview of the last 12 months.

For detailed information on the specific usage numbers, please visit the original report by OpenLink published here. Also, older reports are available through their site.

Further Links

For the latest news, subscribe to the DBpedia Newsletter, check our DBpedia website and follow us on Twitter, Facebook and LinkedIn.

Thanks for reading and keep using DBpedia!

Yours DBpedia Association