
ImageSnippets and DBpedia

 by Margaret Warren 

The following post introduces ImageSnippets and shows how this tool benefits from the use of DBpedia.

ImageSnippets – A Tool for Image Curation

For over two decades, ImageSnippets has been evolving as an ontology- and data-driven framework for image annotation research. Representing the informal knowledge people have about the context and provenance of images as RDF/linked data is challenging, but it has also been an enlightening and engaging journey, not only in applying formal semantic web theory to building image graphs but also in weaving our interests together with what others have been doing in the field of semantic annotation and knowledge graph building over these many years.

DBpedia provides the entities for our RDF descriptions

Since the beginning, we have always made use of DBpedia and other publicly available datasets to provide the entities for use in our RDF descriptions. Though ImageSnippets can be used to build special vocabularies around niche domains, our primary research is around relation ontology building, and we prefer to avoid creating new entities unless we absolutely cannot find them through any other service.

When we first went live with our basic system in 2013, we began hand-building tens of thousands of triples using terms primarily from DBpedia (the core of the linked data cloud). While there would often be an overlap of terms with other datasets – almost a case of too many choices – we formed a best practice of preferentially using DBpedia terms as often as possible, because they gave us the most utility for reasoning using the SKOS concepts built into the DBpedia service. We have also made extensive use of DBpedia Spotlight for named-entity extraction.
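For readers who have not used it before, the sketch below shows what such a named-entity extraction call can look like. It is a minimal illustration using the public DBpedia Spotlight REST endpoint and the Python requests library, not the actual ImageSnippets integration, and the confidence threshold is an arbitrary choice.

    import requests

    # Minimal sketch: annotate free text with DBpedia entities via the public
    # DBpedia Spotlight endpoint. The ImageSnippets integration may differ.
    SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

    def extract_entities(text, confidence=0.5):
        response = requests.get(
            SPOTLIGHT_URL,
            params={"text": text, "confidence": confidence},
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        # Each returned resource carries the DBpedia URI and the surface form.
        return [(r["@surfaceForm"], r["@URI"])
                for r in response.json().get("Resources", [])]

    if __name__ == "__main__":
        caption = "A photograph of the Eiffel Tower taken from the Seine at dusk."
        for surface_form, uri in extract_entities(caption):
            print(surface_form, "->", uri)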

How to combine DBpedia & Wikidata and make it useful for ImageSnippets

But the addition of the Wikidata Query Service over the past 18 months or so has given us a new challenge: how to work with both! Since DBpedia and Wikidata both have class relationships that we can reason from, we found ourselves in a position to examine DBpedia and Wikidata in concert with each other through the use of mapping techniques between the two datasets.
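One common mapping technique is to follow the owl:sameAs links that DBpedia publishes for Wikidata entities. The sketch below is an illustration of that idea rather than the ImageSnippets code; it queries the public DBpedia SPARQL endpoint with the SPARQLWrapper library, and the example resource is arbitrary.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Illustrative sketch: resolve a DBpedia resource to its Wikidata counterpart
    # via the owl:sameAs links exposed at the public DBpedia SPARQL endpoint.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)

    def wikidata_equivalents(dbpedia_uri):
        sparql.setQuery(f"""
            PREFIX owl: <http://www.w3.org/2002/07/owl#>
            SELECT ?wd WHERE {{
                <{dbpedia_uri}> owl:sameAs ?wd .
                FILTER(STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
            }}
        """)
        results = sparql.query().convert()
        return [b["wd"]["value"] for b in results["results"]["bindings"]]

    if __name__ == "__main__":
        print(wikidata_equivalents("http://dbpedia.org/resource/Eiffel_Tower"))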

How it works: ImageSnippets & DBpedia

When an image is saved, we build inference graphs over results from both DBpedia and Wikidata. These graphs can be revealed with simple SPARQL queries at our endpoint, and queries over subclasses, taxons and SKOS concepts can find image results in our custom search tool. We have also recently added a pathfinder utility – highly useful for semantic explainability, as it returns the precise path of connections from an originating source entity to the target entity used in our custom image search.
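To give an idea of the kind of subclass/SKOS reasoning described above, the query pattern below finds DBpedia resources whose Wikipedia category sits anywhere below a broader category via skos:broader. It is a generic illustration against the public DBpedia endpoint, not a query against the ImageSnippets endpoint, and the example category is arbitrary.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Illustrative pattern: collect resources filed under Category:Lighthouses
    # or any of its (transitive) subcategories via the skos:broader hierarchy.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
        PREFIX dct:  <http://purl.org/dc/terms/>
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT DISTINCT ?resource WHERE {
            ?resource dct:subject/skos:broader* <http://dbpedia.org/resource/Category:Lighthouses> .
        }
        LIMIT 25
    """)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["resource"]["value"])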

Sometimes a query will produce very unintuitive results, and the pathfinder tool enables us to quickly locate the semantic errors that lead to clearly erroneous classifications (for example, a search for the Wikidata subclass of ‘communication medium’ reveals images of restaurants and hotels because of misclassifications in Wikidata). In this way we can quickly troubleshoot the results of queries, using the images as visual cues to explore the accuracy of the semantic modelling in both datasets.


We are very excited about the new directions that can come from knitting the two knowledge graphs together through our visual interface, and we believe there is great potential for ImageSnippets to serve a more complex role in cleaning and aligning the two datasets, using the images as our guides.

A big thank you to Margaret Warren for providing some insights into her work at ImageSnippets.

Yours,

DBpedia Association

Better late than never – GSoC 2019 recap & outlook on GSoC 2020

  • Pinky: Gee, Brain, what are we gonna do this year?
  • Brain: The same thing we do every year, Pinky. Taking over GSoC.

And this is exactly what we did. We were accepted as one of 206 open source organizations to participate in Google Summer of Code (GSoC) again. More than 25 students followed our call for project ideas. In the end, we chose six amazing students and their project proposals to work with during summer 2019.
In the following post, we will give you some insights into the project ideas and how they turned out. Additionally, we will shed some light on our amazing team of mentors, who devoted a lot of time and expertise to mentoring our students.

Meet the students and their projects

A Neural QA Model for DBpedia by Anand Panchbhai

With the booming amount of information continuously being added to the internet, organising the facts and serving this information to users becomes a very difficult task. Currently, DBpedia hosts billions of data points and corresponding relations in RDF format. Accessing data on DBpedia via a SPARQL query is difficult for amateur users who do not know how to write a query. This project tried to make this humongous amount of linked data available to a larger user base in their natural languages (currently restricted to English). The primary objective of the project was to translate natural language questions into valid SPARQL queries. Click here if you want to check his final code.
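As a hand-written illustration of the task (not an output of the trained model), a simple question and the kind of SPARQL query it should be translated into might look like this:

    # Hand-written example of a (question, query) pair for natural language to
    # SPARQL translation; illustrative only, not produced by the trained model.
    question = "What is the population of Leipzig?"
    sparql_query = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?population WHERE { dbr:Leipzig dbo:populationTotal ?population . }
    """

    print(f"{question!r} should be translated into:{sparql_query}")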

Multilingual Neural RDF Verbalizer for DBpedia by Dwaraknath Gnaneshwar

Recently, the generation of natural language from RDF data has gained substantial attention and has also been proven to support the creation of Natural Language Generation benchmarks. However, most models are aimed at generating coherent sentences in English, while other languages have enjoyed comparatively little attention from researchers. RDF data usually comes in the form of triples, <subject, predicate, object>: the subject denotes the resource, while the predicate denotes traits or aspects of the resource and expresses the relationship between subject and object. In this project, we aimed to create a multilingual neural verbalizer, i.e. to generate high-quality natural-language text from sets of RDF triples in multiple languages using one stand-alone, end-to-end trainable model. You can follow up on the progress and outcome of the project here.

Predicate Detection using Word Embeddings for Question Answering over Linked Data by Yajing Bian

Knowledge-based question answering (KBQA) systems have demonstrated the ability to generate answers to natural language questions from information stored in a large-scale knowledge base. Generally, they complete this task in three steps: identifying named entities, detecting predicates and generating SPARQL queries. Among these, predicate detection identifies the KB relation(s) a question refers to. To build a predicate detection structure, we first identified all possible named entities and then collected all predicates corresponding to those entities. The next step is to calculate the similarity between the question and the candidate predicates using a multi-granularity neural network model (MGNN). To find the globally optimal entity-predicate assignment, we use a joint model based on the results of the entity linking and predicate detection processes, rather than considering the local predictions (i.e. the most probable entity or predicate) as the final result. More details on the project are available here.
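A toy version of the predicate-detection step might look like the sketch below: it ranks candidate predicates by the cosine similarity between an averaged question embedding and averaged predicate-label embeddings. The real project uses a trained MGNN instead of averaged vectors, and the vocabulary, predicates and (random) vectors here are invented for illustration.

    import numpy as np

    # Toy sketch of predicate detection: rank candidate predicates by cosine
    # similarity between the question embedding and each predicate embedding.
    # Random vectors stand in for real word embeddings; the actual project
    # trains a multi-granularity neural network (MGNN) for this step.
    rng = np.random.default_rng(42)
    vocab = ["where", "was", "albert", "einstein", "born",
             "birth", "place", "death", "spouse"]
    embeddings = {word: rng.normal(size=50) for word in vocab}

    def embed(tokens):
        # Average the vectors of the tokens we have embeddings for.
        return np.mean([embeddings[t] for t in tokens if t in embeddings], axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    question = "where was albert einstein born".split()
    candidates = {
        "dbo:birthPlace": ["birth", "place"],
        "dbo:deathPlace": ["death", "place"],
        "dbo:spouse": ["spouse"],
    }

    q_vec = embed(question)
    scores = {p: cosine(q_vec, embed(tokens)) for p, tokens in candidates.items()}
    for predicate, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(predicate, round(score, 3))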

A tool to generate RDF triples from DBpedia abstracts by Jayakrishna Sahit

The main aim of this project was to research and develop a tool to generate highly trustworthy RDF triples from DBpedia abstracts. To develop such a tool, we implemented algorithms which take the output generated by the syntactic analyzer along with DBpedia Spotlight’s named-entity identifiers. Further information and the project’s results can be found here.

A transformer of Attention Mechanism for Long-context QA by Stuart Chan

In this GSoC project, I chose to employ a transformer language model with an attention mechanism to automatically discover query templates for the neural question-answering knowledge-based model. The ultimate goal was to train the attention-based NSpM model on DBpedia and evaluate it against the QALD benchmark. Check here for more details on the project.

Workflow for linking External datasets by Jaydeep Chakraborty

The requirement of the project was to create a workflow for entity linking between DBpedia and external datasets. We aimed at an approach to ontology alignment through the use of an unsupervised mixed neural network. We explored reading and parsing the ontology and extracted all necessary information about concepts and instances. Additionally, we generated semantic vectors for each entity from different meta information, such as the entity hierarchy, object properties, data properties and restrictions, and designed a user-interface-based system which shows all necessary information about the workflow. Further info, download details and project results are available here.

Meet our Mentors

First of all, a big shout out and thank you to all mentors and co-mentors who helped our students to succeed in their endeavours.

  • Aman Mehta, former GSoC student and current junior mentor, recently interned as a software engineer at Facebook, London.
  • Beyza Yaman, senior mentor and organizational admin, post-doctoral researcher at ADAPT, Dublin City University, former Springer Nature-DBpedia intern and former research associate at InfAI/University of Leipzig. She is responsible for the Turkish DBpedia and her fields of interest are information retrieval, data extraction and integration over Linked Data.
  • Tommaso Soru, senior mentor and organizational admin, Machine Learning & AI enthusiast, Data Scientist at Data Lens Ltd in London and PhD candidate at the University of Leipzig.

“DBpedia is my window to the world of semantic data, not only for its intuitive interface but also because its knowledge is organised in a simple and uncomplicated way”

Tommaso Soru, GSoC 2019
  • Amandeep Srivastava, Junior Mentor and analyst at Goldman Sachs. He’s a huge fan of Christopher Nolan and likes to read fiction books in his free time.
  • Diego Moussalem, senior mentor, Senior Researcher at Paderborn University and an active and vital member of the Portuguese DBpedia Chapter.
  • Luca Virgili, currently a Computer Science PhD student at the Polytechnic University of Marche. He was a GSoC student for one year and a GSoC mentor for two years at DBpedia.
  • Bharat Suri, former GSoC student and junior mentor, Master’s degree in Computer Science at The Ohio State University.

“I have thoroughly enjoyed both my years of GSoC with DBpedia and I plan to stay and help out in whichever way I can”

Bharat Suri, GSoC 2019
  • Mariano Rico, senior mentor, Senior Doctor Researcher at the Ontology Engineering Group, Universidad Politécnica de Madrid.
  • Nausheen Fatma, senior mentor, Data Scientist, Natural Language Processing, Machine Learning at Info Edge (naukri.com).
  • Ram G Athreya, long-term GSoC mentor, Research Engineer at Viv Labs, San Francisco Bay Area.
  • Ricardo Usbeck, team leader ‘Conversational AI and Knowledge Graphs’ at Fraunhofer IAIS.
  • Rricha Jalota, former GSoC student and current senior mentor, developer in the Data Science Group at the University of Paderborn, Germany.

“The reason why I love collaborating with DBpedia (apart from the fact that, it’s a powerhouse of knowledge-driven applications) is not only it gave me my first big break to the amazing field of NLP but also to the world of open-source!”

Rricha Jalota, GSoC 2019

In addition, we would also like to thank the rest of our mentor team, namely Thiago Castro Ferreira, Aashay Singhal and Krishanu Konar, former GSoC student and current senior mentor, for their great work.

Mentor Summit Recap 

This GSoC marked the 15th consecutive year of the program and was the 8th season in a row for DBpedia. As in previous years, two of our mentors, Rricha Jalota and Aashay Singhal, joined the annual GSoC mentor summit. Selected mentors get the chance to meet each other and engage in a vital exchange of knowledge and expertise around various GSoC-related and unrelated topics. Apart from more entertaining activities, such as games, a scavenger hunt and a guided trip through Munich, mentors also discussed pressing questions such as “why is it important to fail your students” or “how can we get our GSoC students to stay and contribute in the long run”.

After GSoC is before the next GSoC

If you are interested in mentoring a DBpedia GSoC project or want to contribute a project of your own, we are happy to have you on board. There are a few things to get you started.

Likewise, if you are an ambitious student who is interested in open source development and working with DBpedia, you are more than welcome to either contribute your own project idea or apply for one of the project ideas we will offer starting in early 2020.

Stay tuned, frequently check Twitter or the DBpedia Forum to stay in touch, and don’t miss your chance to become a crucial force in this endeavour as well as a vital member of the DBpedia community.

See you soon,

Yours,

DBpedia Association

Artificial Intelligence (AI) and DBpedia

Artificial Intelligence (AI) is currently the central subject of the recently announced ‘Year of Science’ by the German Federal Ministry. In recent years, new approaches to facilitating AI have been explored, new mindsets established, new tools developed and new technologies implemented. AI is THE key technology of the 21st century. Together with Machine Learning (ML), it is transforming society faster than ever before and will lead humankind into its digital future.

In this digital transformation era, success will be based on using analytics to discover the insights locked in the massive volume of data being generated today. Success with AI and ML depends on having the right infrastructure to process the data.[1]

The Value of Data Governance

One key element to facilitate ML and AI for the digital future of Europe is ‘decentralized semantic data flows’, as stated by Sören Auer, a founding member of DBpedia and current director of TIB, during a meeting about the digital future in Germany at the Bundestag. He further commented that major AI breakthroughs were indeed facilitated by easily accessible datasets, whereas the algorithms used were comparatively old.

In conclusion, Auer reasons that the actual value lies in data governance. In fact, in order to guarantee progress in AI, the development of a common and transparent understanding of data is necessary. [2]

DBpedia Databus – Digital Factory Platform

The DBpedia Databus – our digital factory platform – is one of many drivers that will help to build the much-needed data infrastructure for ML and AI to prosper. With the DBpedia Databus, we create a hub that facilitates a ‘networked data economy’ revolving around the publication of data. Upholding the motto ‘Unified and Global Access to Knowledge’, the Databus facilitates exchanging, curating and accessing data between multiple stakeholders – anytime, anywhere. Publishing data on the Databus means connecting and comparing (your) data to the network. Check our current DBpedia releases via http://downloads.dbpedia.org/repo/dev/.

DBpedia Day & AI for Smart Agriculture

Furthermore, you can learn more about the DBpedia Databus during our 13th DBpedia Community Meeting, co-located with the LDK conference in Leipzig in May 2019. Additionally, as a special treat, we also offer an AI side event on May 23rd, 2019.

May we present the think tank and hackathon “Artificial Intelligence for Smart Agriculture”. The goal of this event is to develop new ideas and small tools which demonstrate the use of AI in the agricultural domain or the use of AI for a sustainable bio-economy. In that regard, a special focus will be on the use and impact of linked data for AI components.

In short, the two-part event, co-located with LSWT & DBpedia Day, comprises workshops, on-site team hacking as well as presentations of the results. The activity is supported by the projects DataBio and Bridge2Era as well as CIAOTECH/PNO. All participating teams are invited to join and present their projects. Further information is available here. Please submit your ideas and projects here.

 

Finally, the DBpedia Association is looking forward to meeting you in Leipzig, home of our head office. Pay us a visit!

____

Resources:

[1] Zeus Kerravala: The Success of Artificial Intelligence and Machine Learning Requires an Architectural Approach to Infrastructure. ZK Research: A Division of Kerravala Consulting, © 2018 ZK Research. Available via http://bit.ly/2UwTJRo

[2] Sören Auer: Statement at the Bundestag during a meeting on AI. A summary is available via https://www.tib.eu/de/service/aktuelles/detail/tib-direktor-als-experte-zu-kuenstlicher-intelligenz-ki-im-deutschen-bundestag/

The Release Circle – A Glimpse behind the Scenes

As you already know, with the new DBpedia strategy our mode of publishing releases has changed. The new DBpedia release process follows a three-step approach, starting with the Extraction, followed by the ID Management and finally the Fusion, which finalizes the release process. Our DBpedia releases are currently published on a monthly basis. In this post, we give you insight into the individual steps of the release process and into what our developers actually do when preparing a DBpedia release.

Extraction  – Step one of the Release

The good news is that our new release mode is taking shape and has noticeably picked up speed. Finally, the 2018-08 release as well as the 2018.09.12 and 2018.10.16 releases are now available in our LTS repository.

The 2018-08 release was generated on the basis of the Wikipedia datasets extracted in early August and currently comprises 136 languages. The extraction release contains the raw extracted data generated by the DBpedia extraction framework. The post-processing steps, such as data deduplication or URI normalization, are omitted and moved to later parts of the release process. Thus, we can provide direct, transparent access to the generated data at every step. Until we manage two releases per month, our data is mostly based on the second Wikipedia dataset of the previous month. In line with that, the 2018.09.12 release is based on late August data and the recent 2018.10.16 release is based on Wikipedia datasets extracted on September 20th. They all comprise 136 languages and contain a stable list of datasets since the 2018-08 release.

Our releases are now ready for parsing and external use. Additionally, there will be a new Wikidata-based release this week.

ID-Management – Step two of the Release

For a complete “new DBpedia” release, the DBpedia ID Management and the Fusion of the data have to be added to the process. The Databus ID Management is a process to unify the various IRIs, coined by different data providers, that identify the same entities. Taking datasets with overlapping domains of interest from multiple data providers, the set of IRIs denoting the entities in the source datasets is determined heuristically (e.g. excluding RDF/OWL types/classes).

Afterwards, these selected IRIs are assigned a numeric primary key, the ‘Singleton ID’. The core of the ID Management process happens in the next step: based on the large set of high-confidence owl:sameAs assertions in the input data, the connected components induced by the corresponding sameAs-graph are computed. In other words: the groups of all entities from the input datasets that are (transitively) reachable from one another are determined. We dubbed these groups the sameAs-clusters. For each sameAs-cluster we pick one member as representative, which determines the ‘Cluster ID’ or ‘Global Identifier’ for all cluster members.
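At its core, this clustering step is a connected-components computation over the sameAs graph. The sketch below illustrates the idea with a plain union-find over a handful of invented owl:sameAs pairs; the production ID Management performs the same computation at scale with Apache Spark.

    # Minimal sketch of sameAs clustering: treat high-confidence owl:sameAs
    # assertions as undirected edges and compute connected components with
    # union-find. The IRIs below are invented for illustration; the production
    # workflow does this with Apache Spark over much larger inputs.
    same_as_pairs = [
        ("http://dbpedia.org/resource/Leipzig", "http://www.wikidata.org/entity/Q2079"),
        ("http://de.dbpedia.org/resource/Leipzig", "http://dbpedia.org/resource/Leipzig"),
        ("http://example.org/city/42", "http://www.wikidata.org/entity/Q2079"),
    ]

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in same_as_pairs:
        union(a, b)

    # Group members by their representative; the representative plays the role
    # of the 'Cluster ID' / 'Global Identifier' for the whole sameAs cluster.
    clusters = {}
    for iri in parent:
        clusters.setdefault(find(iri), []).append(iri)

    for representative, members in clusters.items():
        print(representative, "<-", members)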

Apart from being an essential preparatory step for the Fusion, these Global Identifiers serve a purpose in their own right as unified Linked Data identifiers for groups of Linked Data entities that should be viewed as equivalent or ‘the same thing’.

A processing workflow based on Apache Spark that performs the process described above for large quantities of RDF input data is already in place and has been run successfully for a large set of DBpedia inputs consisting of:

 

Fusion – Step three of the Release

Based on the Extraction and the ID Management, the Data Fusion finalizes the last step of the DBpedia release cycle. With the goal of improving data quality and data coverage, the process uses the DBpedia global IRI clusters to fuse and enrich the source datasets. The fused data contains all resources of the input datasets. The fusion process relies on a functional-property decision (owl:FunctionalProperty determination) to decide how many values are selected for a property. Further, the value selection for these functional properties is based on a preference over the originating source datasets, for example preferring values from the English DBpedia over the German DBpedia.
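A drastically simplified version of that selection logic is sketched below. The source ranking, the property names and the values are invented for illustration and do not reflect the actual fusion configuration; the point is only that functional properties keep a single value from the most preferred source, while other properties keep the union of values.

    # Simplified sketch of fusion value selection: a functional property keeps a
    # single value chosen by a preference order over source datasets, while a
    # non-functional property keeps the union of all contributed values. The
    # ranking, properties and values are invented for illustration.
    SOURCE_PREFERENCE = ["en.dbpedia.org", "de.dbpedia.org", "fr.dbpedia.org"]
    FUNCTIONAL_PROPERTIES = {"dbo:populationTotal", "dbo:birthDate"}

    def fuse(values_by_source, prop):
        """values_by_source maps a source dataset to the values it contributes."""
        if prop in FUNCTIONAL_PROPERTIES:
            # Take one value from the most preferred source that provides any.
            for source in SOURCE_PREFERENCE:
                if values_by_source.get(source):
                    return values_by_source[source][:1]
            return []
        # Non-functional property: merge all contributed values.
        return sorted({v for values in values_by_source.values() for v in values})

    if __name__ == "__main__":
        contributions = {
            "de.dbpedia.org": ["601866"],
            "en.dbpedia.org": ["605407"],
        }
        print(fuse(contributions, "dbo:populationTotal"))  # -> ['605407']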

The enrichment improves entity-property and value coverage for resources only contained in the source data. Furthermore, we create provenance data to keep track of the origin of each triple. This provenance data is also used for the HTTP-based resource view at http://global.dbpedia.org.

At the moment, the fused and enriched data is available for the generic and mapping-based extractions. More datasets are still in progress. The DBpedia fusion data is being uploaded to http://downloads.dbpedia.org/repo/dev/fusion/

Please note that we are still in the midst of beta testing our data release tool, so in case you come across any errors, reporting them to us is much appreciated, as it fuels the testing process.

Further information regarding the releases progress can be found here: http://dev.dbpedia.org/

Next steps

We will add more releases to the repository on a monthly basis, aiming for a bi-weekly release mode as soon as possible. In between releases, any mistakes or errors you find and report in this data can be fixed for the upcoming release. Currently, the generated metadata in the DataID file is not stable; it still needs to be improved and will change in the near future. We are now working on the next release and will inform you as soon as it is published.

Yours DBpedia Association

This blog post was written with the help of our DBpedia developers Robert Bielinski, Markus Ackermann and Marvin Hofer, who were responsible for the work done with respect to the DBpedia releases. We would like to thank them for their great work.

 

Retrospective: GSoC 2018

With all the beta testing, the evaluations of the community survey parts I and II and the preparations for SEMANTiCS 2018, we almost lost sight of telling you about the final results of GSoC 2018. In the following, we present a short recap of this year’s students and the projects that made it to the finish line of GSoC 2018.

 

Et Voilà

We started out with six students who committed to GSoC projects. However, over the course of the summer, some dropped out or did not pass the midterm evaluation. In the end, we had three finalists who made it through the program.

Meet Bharat Suri

… who worked on “Complex Embeddings for OOV Entities”. The aim of this project was to enhance the DBpedia Knowledge Base by enabling the model to learn from the corpus and generate embeddings for different entities, such as classes, instances and properties.  His code is available in his GitHub repository. Tommaso Soru, Thiago Galery and Peng Xu supported Bharat throughout the summer as his DBpedia mentors.

Meet Victor Fernandez

… who worked on a “Web application to detect incorrect mappings across DBpedias in different languages”. The aim of his project was to create a web application and API to aid in automatically detecting inaccurate DBpedia mappings. The mappings for each language are often not aligned, causing inconsistencies in the quality of the generated RDF. The final code of this project is available in Victor’s repository on GitHub. He was mentored by Mariano Rico and Nandana Mihindukulasooriya.

Meet Aman Mehta

… whose project aimed at building a model which allows users to query DBpedia directly using natural language, without the need to have any previous experience with SPARQL. His task was to train a Sequence-2-Sequence neural network model to translate any Natural Language Query (NLQ) into a sentence encoding the corresponding SPARQL query. See the results of this project in Aman’s GitHub repository. Tommaso Soru and Ricardo Usbeck were his DBpedia mentors during the summer.

Finally, these projects will contribute to the overall development of DBpedia. We are very satisfied with the contributions and results our students produced. Furthermore, we would like to genuinely thank all students and mentors for their efforts. We hope to stay in touch and see a few faces again next year.

A special thanks goes out to all mentors and students whose projects did not make it through.

GSoC Mentor Summit

Now it is the mentors’ turn to take part in this year’s GSoC mentor summit, from October 12th to 14th. This year, Mariano Rico and Thiago Galery will represent DBpedia at the event. Their task is to engage in a vital discussion about this year’s program, about lessons learned, and about the highlights and drawbacks they experienced during the summer. Hopefully, they will return with new ideas from the exchange with mentors from other open source projects. In turn, we hope to improve our part of the program for students and mentors.

Sit tight, follow us on Twitter and we will update you about the event soon.

Yours DBpedia Association