Category Archives: Google Summer of Code

Meeting with the US-DBpedians – A Wrap-Up

One lightning event after the other. Just four weeks after our Amsterdam Community Meeting, we crossed the Atlantic for the third time to meet with over 110 US-based DBpedia enthusiasts. This time, the DBpedia Community met in Cupertino, California, hosted by Apple Inc.

Main Event

First and foremost, we would like to thank Apple for the warm welcome and the hosting of the event.

After a Meet & Greet with refreshments, Taylor Rhyne, Eng. Product Manager at Apple, and Pablo N. Mendes, Researcher at Apple and chair of the DBpedia Community Committee, opened the main event with a short introduction setting the tone for the following 2 hours.

The main event attracted attendees with eleven invited talks from major Bay Area companies actively using DBpedia or interested in knowledge graphs in general, such as Diffbot, IBM, Wikimedia, NTENT, Nuance, Volley and Stardog Union.

Tommaso Soru – University of Leipzig

Tommaso Soru (University of Leipzig), DBpedia mentor in our Google Summer of Code (GSoC) projects, opened the invited talks session with updates from the DBpedia developer community. This year, DBpedia participated in the GSoC 2017 program with 7 different projects, including “First Chatbot for DBpedia”, which was selected as Best DBpedia GSoC Project 2017. His presentation is available here.

DBpedia would like to thank the following people for organizing and hosting our Community Meeting in Cupertino, California.

 

Invited Talks – A Short Recap

Filipe Mesquita (Diffbot) introduced the new DBpedia NLP Department, born from a recent partnership between our organization and the California-based company, which aims at creating the most accurate and comprehensive database of human knowledge. His presentation is available here. Dan Gruhl (IBM Research) held a presentation about the in-house development of an omnilingual ontology and how DBpedia data supported this endeavor.

Filipe Mesquita – Diffbot

Stas Malyshev, representing Dario Taraborelli (both Wikimedia Foundation), presented the current state of the structured data initiatives at Wikidata and its query capabilities. Their slides are available here and here. Ricardo Baeza-Yates (NTENT) gave a short talk on mobile semantic search.

The second part of the event saw Peter F. Patel-Schneider (Nuance) holding a presentation titled “DBpedia from the Fringe”, giving some insights on how DBpedia could be further improved. Shortly after, Sebastian Hellmann, Executive Director of the DBpedia Association, took the stage and presented the current state of the Association, including achievements and future goals. Sanjay Krishnan (U.C. Berkeley) talked about the link between AlphaGo and data cleansing. You can find his slides here. Bill Andersen (Volley.com) argued for the use of extremely precise and fine-grained approaches to deal with small data. His presentation is available here. Finally, Michael Grove (Stardog Union) stressed the view of knowledge graphs as knowledge toolkits backed by a graph data model.

Michael Grove – Stardog Union

The event concluded with refreshments, snacks and drinks served in the atrium, giving all participants the chance to talk about the presented topics, discuss the latest developments in the field of knowledge graphs and network with each other. In the end, this closing session ran much longer than planned.

GSoC Mentor Summit

Shortly after the CA Community Meeting, our DBpedia mentors Tommaso Soru and Magnus Knuth participated in the Google Summer of Code Mentor Summit held in Sunnyvale, California. During free sessions hosted by mentors from diverse open source organizations, Tommaso and Magnus presented selected projects in their lightning talks. Beyond open source, open data topics were addressed in multiple sessions, as these are not only relevant for research but also strongly needed in software projects. The meetings paved the way for new collaborations in various areas, e.g. question answering over the DBpedia knowledge corpus, in particular the use of Neural SPARQL Machines for the translation of natural language into structured queries. We expect this hot deep-learning topic to be featured in the next edition of GSoC projects. Overall, it has been a great experience to meet so many open source fellows from all over the world.

Upcoming events

After one event is before the next …

Connected Data London, November 16th, 2017.

Sebastian Hellmann, Executive Director of the DBpedia Association, will present “Data Quality and Data Usage in a large-scale Multilingual Knowledge Graph” during the content track at Connected Data London. He will also join the late-afternoon panel discussion “Linked Open Data: Is it failing or just getting out of the blocks?”. Feel free to join the event and support DBpedia.

A message for all DBpedia enthusiasts – our next Community Meeting

We are currently planning our next Community Meeting and would like to invite DBpedia enthusiasts and chapters who would like to host a meeting to send us their ideas to dbpedia@infai.org. The meeting is scheduled for the beginning of 2018. Any suggestions regarding place, time, program and topics are welcome!

Check our website for further updates, follow us on Twitter or subscribe to our newsletter.

We will keep you posted

Your DBpedia Association

DBpedia will meet the US-based Community

Only 8 days left to reserve your seat for our 3rd US DBpedia Community Meeting. We are happy to announce that the 11th DBpedia Meeting will be held in Cupertino, California on October 12th, 2017, hosted by Apple Inc.

The meetup focuses on connecting the community interested in DBpedia and knowledge graphs in general, and has included lightning talks by distinguished speakers (e.g. from Stanford, Google, IBM Watson, Netflix, LinkedIn, Wikimedia Foundation, Nuance, etc.). Talk topics have also extended to natural language processing, knowledge representation, information extraction, integration and retrieval, graph databases, knowledge base embeddings and machine learning.

We are looking forward to meeting again in person with the US-based DBpedia Community.

Quick facts

  • Host: Apple Inc.
  • Registration: through eventbrite (limited seats)

Schedule

Please check our schedule for the next DBpedia Community Meeting here: http://wiki.dbpedia.org/meetings/California2017

Acknowledgments

If you would like to become a sponsor for the 11th DBpedia Meeting, please contact the DBpedia Association.

  • Apple Inc. – for sponsoring catering and hosting our meetup on their campus
  • Google Summer of Code 2017 – an amazing program and the reason some of our core DBpedia devs are visiting California
  • ALIGNED – Software and Data Engineering – for funding the development of DBpedia as a project use-case and covering part of the travel costs
  • Institute for Applied Informatics – for supporting the DBpedia Association
  • OpenLink Software – for continuous hosting of the main DBpedia Endpoint

Organisation

Registration

Attending the DBpedia Community Meeting is free of charge, but seats are limited. Make sure to register to reserve a seat.

We are looking forward to meeting you in California.

Check our website for further updates, follow us on Twitter or subscribe to our newsletter.

Your DBpedia Association

 

GSoC 2017 – Recap and Results

We are very pleased to announce that all of this year’s Google Summer of Code students made it successfully through the program and passed their projects. All code has been submitted, merged and is ready to be examined by the rest of the world.

Marco Fossati, Dimitris Kontokostas, Tommaso Soru, Domenico Potena, Emanuele Storti, Anastasia Dimou, Wouter Maroy, Peng Xu, Sandro Coelho and Ricardo Usbeck, members of the DBpedia Community, did a great job mentoring 7 students from around the world. All of the students enjoyed their experience during the program and will hopefully continue to contribute to DBpedia in the future.

“GSoC is the perfect opportunity to learn from experts, get to know new communities, design principles and work flows.” (Ram G Athreya)

Now, we would like to take the opportunity to give you a little recap of the projects mentored by DBpedia members during the past months. Just click below for more details.

 

DBpedia Mappings Front-End Administration by Ismael Rodriguez

The goal of the project was to create a front-end application that provides a user-friendly interface so the DBpedia community can easily view, create and administrate DBpedia mapping rules using RML. The developed system includes user administration features, help posts, GitHub mappings synchronization, and rich RML-related features such as syntax highlighting, RML code generation from templates, RML validation, extraction and statistics. Some of these features are made possible by the interaction with the DBpedia Extraction Framework. In the end, all the required functionalities and goals have been implemented, with many functional tests and the approval of the DBpedia community. The system is ready for production deployment. For further information, please visit the project blog. Mentors: Anastasia Dimou and Wouter Maroy (Ghent University), Dimitris Kontokostas (GeoPhy HQ).

Chatbot for DBpedia by Ram G Athreya

DBpedia Chatbot is a conversational chatbot for DBpedia which is accessible through the following platforms: a Web Interface, Slack and Facebook Messenger.

The bot is capable of responding to users in the form of simple short text messages or through more elaborate interactive messages. Users can communicate or respond to the bot through text and also through interactions (such as clicking on buttons/links). The bot tries to answer text based questions of the following types: natural language questions, location information, service checks, language chapters, templates and banter. For more information, please follow the link to the project site. Mentor: Ricardo Usbeck (AKSW).
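To make the back-end lookup behind such question answering concrete, here is a minimal sketch of how one question template can be turned into a SPARQL query against the public DBpedia endpoint. The helper name and the fixed "birth place" template are our own illustration, not taken from the DBpedia Chatbot's code; only the endpoint URL and the `dbo:`/`rdfs:` vocabulary are standard DBpedia.

```python
# Sketch: map a "Where was X born?" question to a DBpedia SPARQL query.
# The function name and the query template are illustrative only.
SPARQL_ENDPOINT = "https://dbpedia.org/sparql"

def birthplace_query(person_label: str) -> str:
    """Build a SPARQL query for the birth place of a person,
    matching the resource by its English rdfs:label."""
    return f"""
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?place WHERE {{
  ?person rdfs:label "{person_label}"@en ;
          dbo:birthPlace ?place .
}}"""

query = birthplace_query("Ada Lovelace")
print(query)  # this string would be sent to SPARQL_ENDPOINT over HTTP
```

A chatbot would then render the bindings returned by the endpoint as a short text message or as interactive cards, depending on the platform.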

Knowledge Base Embeddings for DBpedia by Nausheen Fatma

Knowledge base embeddings have been an active area of research. In recent years a lot of research work, such as TransE, TransR, RESCAL, SSP, etc., has been done on knowledge base embeddings. However, none of these approaches have used DBpedia to validate their approach. In this project, I want to achieve the following tasks: i) run the existing techniques for KB embeddings on standard datasets; ii) create an equivalent standard dataset from DBpedia for evaluations; iii) evaluate across domains; iv) compare and analyse the performance and consistency of the various approaches on the DBpedia dataset along with other standard datasets; v) report any challenges that may come up while implementing the approaches for DBpedia. For more information, please follow the links to her project blog and GitHub repository. Mentors: Tommaso Soru (AKSW) and Sandro Coelho (KILT).
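For readers unfamiliar with the embedding methods named above, here is a minimal sketch of the TransE scoring idea: a triple (head, relation, tail) is modeled as a translation h + r ≈ t, and its plausibility is the distance between h + r and t. The 2-dimensional vectors below are hand-picked toy values, not learned embeddings.

```python
import math

def transe_score(h, r, t):
    """TransE plausibility score: Euclidean distance ||h + r - t||.
    Lower scores mean the triple is considered more plausible."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy vectors chosen so that head + relation equals tail exactly.
head, relation, tail = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
print(transe_score(head, relation, tail))        # a perfect triple scores 0.0

# A corrupted tail yields a larger distance, i.e. a less plausible triple.
print(transe_score(head, relation, [3.0, 1.0]))  # 2.0
```

Training such a model means adjusting the vectors so that true DBpedia triples score lower than corrupted ones; TransR, RESCAL and SSP vary the scoring function and the spaces the vectors live in.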

Knowledge Base Embeddings for DBpedia by Akshay Jagatap

The project defined embeddings to represent classes, instances and properties by implementing Random Vector Accumulators with additional features in order to better encode the semantic information held by the Wikipedia corpus and DBpedia graphs. To test the quality of embeddings generated by the RVA, lexical memory vectors of locations were generated and tested on a modified subset of the Google Analogies Test Set. Check out further information via Akshay’s GitHub-repo. Mentors: Tommaso Soru (AKSW) and Xu Peng (University of Alberta).

The Table Extractor by Luca Vergili

Wikipedia is full of data hidden in tables. The aim of this project was to explore the possibilities of exploiting all the data represented in tables in Wiki pages, in order to populate the different chapters of DBpedia with new data of interest. The Table Extractor is meant to be the engine of this data “revolution”: it would achieve the final purpose of extracting the semi-structured data from all those tables now scattered across most Wiki pages. On this page you can find the datasets (English and Italian) extracted using the Table Extractor. You can also read the log files to see all the operations performed to create the RDF triples. I also recommend this page, which explains the idea behind the project and gives examples of results extracted from the log files and .ttl datasets. For more details see Luca’s GitHub repository. Mentors: Domenico Potena and Emanuele Storti (Università Politecnica delle Marche).
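The core idea, stripped down, is to pair each table cell with its column header to form predicates, with one row yielding one subject. The following is our own toy illustration of that mapping, not the project's code; the example data and the `example.org` URI scheme are invented for the sketch.

```python
# Toy sketch of table-to-triples extraction: the first column provides the
# subject, each remaining column header becomes a predicate, each cell an object.
def table_to_triples(headers, rows, base="http://example.org/resource/"):
    """Turn a header list and row lists into (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subject = base + row[0].replace(" ", "_")
        for header, cell in zip(headers[1:], row[1:]):
            triples.append((subject, base + header.replace(" ", "_"), cell))
    return triples

headers = ["Athlete", "Country", "Medals"]
rows = [["Usain Bolt", "Jamaica", "8"]]
for triple in table_to_triples(headers, rows):
    print(triple)
```

The real extractor additionally has to parse the raw wikitable markup, handle merged cells and inconsistent layouts, and map headers onto DBpedia ontology properties before serializing the result as .ttl.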

 

Unsupervised Learning of DBpedia Taxonomy by Shashank Motepalli

Wikipedia represents a comprehensive cross-domain source of knowledge with millions of contributors. The DBpedia project tries to extract structured information from Wikipedia and transform it into RDF.

The main classification system of DBpedia depends on human curation, which causes it to lack coverage, resulting in a large amount of untyped resources. DBTax provides an unsupervised approach that automatically learns a taxonomy from the Wikipedia category system and extensively assigns types to DBpedia entities, through the combination of several NLP and interdisciplinary techniques. It provides a robust backbone for DBpedia knowledge and has the benefit of being easy to understand for end users. Details about his work and his code can be found on the project’s site. Mentors: Marco Fossati (Università degli Studi di Trento) and Dimitris Kontokostas (GeoPhy HQ).

The  Wikipedia List-Extractor by Krishanu Konar

This project aimed to build upon the existing list-extractor project by Federica from GSoC 2016. The project focused on the extraction of relevant but hidden data which lies inside lists in Wikipedia pages. Wikipedia, being the world’s largest encyclopedia, holds a humongous amount of information in the form of text. While key facts and figures are encapsulated in a resource’s infobox, and some detailed statistics are present in the form of tables, a lot of data is also present in the form of lists, which are quite unstructured and hence difficult to turn into semantic relationships. The main objective of the project was to create a tool that can extract information from Wikipedia lists and form appropriate RDF triples that can be inserted into the DBpedia dataset. For details on the code and the project, check Krishanu’s blog and GitHub repository. Mentors: Marco Fossati (Università degli Studi di Trento), Domenico Potena and Emanuele Storti (Università Politecnica delle Marche).

Read more

We are regularly growing our community through GSoC and can deliver more and more opportunities to you. Ideas and applications for the next edition of GSoC are very much welcome. Just contact us via email or check our website for details.

Again, DBpedia is planning to be a vital part of the GSoC Mentor Summit, from October 13th to 15th, at the Google Campus in Sunnyvale, California. This summit is a way to say thank you to the mentors for the great job they did during the program. Moreover, it is a platform to discuss what can be done to improve GSoC and how to keep students involved in their communities post-GSoC.

And there is more good news. DBpedia wants to meet up with the US community during the 11th DBpedia Community Meeting in California. We are currently working on the program and will keep you posted as soon as registration is open.

So, stay tuned and check Twitter, Facebook and the website or subscribe to our newsletter for the latest news and updates.

See you soon!

Yours,

DBpedia Association

Career Opportunities at DBpedia – A Success Story

Google Summer of Code is a global program focused on introducing students to open source software development.

During the 3-month summer break from university, students work on a programming project with an open source organization, like DBpedia.

We have been part of this exciting program for more than 5 years now, and many exciting projects have developed as a result of intense coding during hot summers. By presenting Wouter Maroy, who was a GSoC student in 2016 and is a mentor in this year’s program, we would like to give you a glimpse behind the scenes and show you how important the program is to DBpedia.


Success Story: Wouter Maroy

Who are you?

I’m Wouter Maroy, a 23-year-old Master’s student in Computer Science Engineering at Ghent University (Belgium). I’m affiliated with IDLab – imec. Linked Data and Big Data technologies are my two favorite fields of interest. Besides my passion for Computer Science, I like to travel, explore and look for adventures. I’m a student who enjoys his student life in Ghent.

What is your main interest in DBpedia and what was your motivation to apply for a DBpedia project at GSoC 2016?

I took courses during my Bachelor’s with lectures about Linked Data and the Semantic Web, which of course included DBpedia; it’s an interesting research field. Before my GSoC 2016 application I did some work on Semantic Web technologies and on a technology (RML) that was required for a GSoC 2016 project listed by DBpedia. I wanted to get involved in open source and DBpedia, so I applied.

What did you do?

Until then, DBpedia had used a custom mapping language to generate structured data from raw Wikipedia infobox data. A next step was to evolve this process towards a more modular and generic approach that leads to higher-quality linked data generation. This new approach relied on the integration of RML, the RDF Mapping Language, and was the goal of the GSoC 2016 project I applied for. Understanding all the necessary details of the GSoC project required some effort and research before I started coding. I also had to learn a new programming language (Scala). I had good assistance from my mentors, so this turned out very well in the end. DBpedia’s Extraction Framework, which is used for extracting structured data from Wikipedia, has quite a large codebase. It was the first project of this size I was involved in. I learned a lot from reading its codebase and from contributing code during these months.

Dimitris Kontokostas and Anastasia Dimou were my two mentors. They guided me well throughout the project. I interacted daily with them through Slack and each week we had a conference call to discuss the project.  After many months of research, coding and discussing we managed to deliver a working prototype at the end of the project. The work we did was presented in Leipzig on the DBpedia day during SEMANTICS 16’. Additionally, this work will also be presented at ISWC 2017.

You can check out his project here.

How do you currently contribute to improve DBpedia?  

I’m mentoring a GSoC 2017 project together with Dimitris Kontokostas and Anastasia Dimou as a follow-up on the work that was done in our GSoC 2016 project last year. Ismael Rodriguez is the new student participating in the project, and he has already delivered great work! Besides being a mentor for GSoC 2017, I make sure that the integration of RML into DBpedia is going in the right direction in general (managing, coding, …). For this reason, I worked at the KILT/DBpedia office in Leipzig for 6 weeks during the summer. Joining and getting to know the team was a great experience.

What did you gain from the project?

Throughout the project I practiced coding, working in a team, … I learned more about DBpedia, RML, Linked Data and other related technologies. I’m glad I had the opportunity to learn this much from the project. I would recommend it to all students who are curious about DBpedia, who are eager to learn and who want to earn a stipend during summer through coding. You’ll learn a lot and you’ll have a good time!

Final words to future GSoC applicants for DBpedia projects.

Give it a shot! Really, it’s a lot of fun! Coding for DBpedia through GSoC is a great, unique experience, and anyone who is enthusiastic about coding and the DBpedia project should definitely go for it.

 

So, follow us on Twitter, Facebook and Subscribe to our Newsletter to never miss any information about GSoC 2018 projects or internship opportunities.

 

Yours

 

DBpedia Association

 

GSoC 2017 – may the code be with you

GSoC students have finally been selected.

We are very excited to announce this year’s final students for our projects at the Google Summer of Code program (GSoC).

Google Summer of Code is a global program focused on bringing more student developers into open source software development. Stipends are awarded to students to work on a specific DBpedia related project together with a set of dedicated mentors during summer 2017 for the duration of three months.

For the past 5 years, DBpedia has been a vital part of the GSoC program. Since our very first participation, many DBpedia projects have been successfully completed.

In this year’s GSoC edition, DBpedia received more than 20 submissions for selected DBpedia projects. Our mentors read and evaluated many promising proposals, and now the crème de la crème of students have snatched a spot for this summer. In the end, 7 students from around the world were selected and will work together with their assigned mentors on their projects. DBpedia developers and mentors are really excited about these 7 promising student projects.

List of students and projects:

You want to read more about their specific projects? Just click below… or check GSoC pages for details.

Ismael Rodriguez – Project Description: Although the DBpedia Extraction Framework was adapted to support RML mappings thanks to a project of last year’s GSoC, the user interface for creating mappings is still a MediaWiki installation, which does not support RML mappings and requires Semantic Web expertise. The goal of the project is to create a front-end application that provides a user-friendly interface so the DBpedia community can easily view, create and administrate DBpedia mapping rules using RML. Moreover, it should also facilitate data transformations and overall DBpedia dataset generation. Mentors: Anastasia Dimou, Dimitris Kontokostas, Wouter Maroy

Ram Ganesan Athreya – Project Description: The requirement of the project is to build a conversational chatbot for DBpedia which would be deployed in at least two social networks. There are three main challenges in this task. First is understanding the query presented by the user, second is fetching relevant information based on the query through DBpedia, and finally tailoring the responses based on the standards of each platform and developing subsequent user interactions with the chatbot. Based on my understanding, the process of understanding the query would be undertaken by one of the mentioned QA systems (HAWK, QANARY, openQA). Based on the response from these systems, we need to query the DBpedia dataset using SPARQL and present the data back to the user in a meaningful way. Ideally, both the presentation and the interaction flow need to be tailored for the individual social network. I would like to stress that although the primary medium of interaction is text, platforms such as Facebook insist that a proper mix of chat and interactive elements such as images, buttons etc. leads to better user engagement. So I would like to incorporate these elements as part of my proposal.

Mentor: Ricardo Usbeck

 

Nausheen Fatma – Project Description: Knowledge base embeddings have been an active area of research. In recent years a lot of research work, such as TransE, TransR, RESCAL, SSP, etc., has been done on knowledge base embeddings. However, none of these approaches have used DBpedia to validate their approach. In this project, I want to achieve the following tasks: i) Run the existing techniques for KB embeddings on standard datasets. ii) Create an equivalent standard dataset from DBpedia for evaluations. iii) Evaluate across domains. iv) Compare and analyse the performance and consistency of various approaches on the DBpedia dataset along with other standard datasets. v) Report any challenges that may come up while implementing the approaches for DBpedia. Along the way, I will also try my best to come up with a new research approach for the problem.

Mentors: Sandro Athaide Coelho, Tommaso Soru

 

Akshay Jagatap – Project Description: The project aims at defining embeddings to represent classes, instances and properties. Such a model tries to quantify semantic similarity as a measure of distance in the vector space of the embeddings. I believe this can be done by implementing Random Vector Accumulators with additional features in order to better encode the semantic information held by the Wikipedia corpus and DBpedia graphs.

Mentors: Pablo Mendes, Sandro Athaide Coelho, Tommaso Soru

 

Luca Virgili – Project Description: In Wikipedia a lot of data is hidden in tables. What we want to do is read all the tables in a page correctly. First of all, we need a tool that allows us to capture the tables represented in a Wikipedia page. After that, we have to understand what we have read. Both these operations seem easy, but many problems can arise. The main issue we have to solve stems from how people build tables. Everyone has a particular style of representing information, so in one table we may find structures that don’t appear in another. In this proposal I suggest improving last year’s project and creating a general way of reading data from Wikipedia tables. I want to revise the parser for Wikipedia pages to recognize as many types of tables as possible. Furthermore, I’d like to build an algorithm that compares the column elements (read previously by the parser) to an ontology, so it can work out how the user wrote the information. In this way we can define only a few mapping rules, and we can build more generalized software.

Mentors: Emanuele Storti, Domenico Potena

 

Shashank Motepalli – Project Description: DBpedia tries to extract structured information from Wikipedia and make it available on the Web. In this way, the DBpedia project develops a gigantic source of knowledge. However, the current system for building the DBpedia Ontology relies on infobox extraction. Infoboxes, being human curated, limit the coverage of DBpedia, either due to the lack of infoboxes on some pages or to over-specific or very general taxonomies. These factors have motivated the need for DBTax. DBTax follows an unsupervised approach to learning a taxonomy from the Wikipedia category system. It applies several interdisciplinary NLP techniques to assign types to DBpedia entities. The primary goal of the project is to streamline and improve the proposed approach, making it easy to run on a new DBpedia release, and in addition to extend DBTax taxonomy learning to other Wikipedia languages.

Mentors: Marco Fossati, Dimitris Kontokostas

 

Krishanu Konar – Project Description: Wikipedia, being the world’s largest encyclopedia, holds a humongous amount of information in the form of text. While key facts and figures are encapsulated in a resource’s infobox, and some detailed statistics are present in the form of tables, a lot of data is also present in the form of lists, which are quite unstructured and hence difficult to turn into semantic relationships. The project focuses on the extraction of this relevant but hidden data which lies inside lists in Wikipedia pages. The main objective of the project is to create a tool that can extract information from Wikipedia lists and form appropriate RDF triples that can be inserted into the DBpedia dataset.

Mentor: Marco Fossati 

Read more

Congrats to all selected students! We will keep our fingers crossed now and patiently wait until early September, when final project results are published.

An encouraging note to the less successful students.

The competition for GSoC slots is always at a very high level, and DBpedia only has a limited number of slots available for students. In case you weren’t among those selected, do not give up on DBpedia just yet. There are plenty of opportunities to prove your abilities and be part of the DBpedia experience. You, above all, know DBpedia by heart. Hence, contributing to our support system is not only a great way to be part of the DBpedia community but also an opportunity to be vital to DBpedia’s development. Above all, it is a chance for current DBpedia mentors to get to know you better. It will give your future mentors a chance to support you and help you develop your ideas from the very beginning.

Go on you smart brains, dare to become a top DBpedia expert and provide good support for other DBpedia users. Sign up to our support page or check out the following ways to contribute:

Get involved:
  • Join our DBpedia-discussion mailing list, where we discuss current DBpedia developments. NOTE: mails announcing tools or calls for papers unrelated to DBpedia are not allowed. This is a community discussion list.
  • If you would like to join DBpedia developer and technical discussions, sign up on Slack
  • Developer Discussion
  • Become a DBpedia Student and sign up for free at the DBpedia Association. We offer special programs that provide training and other opportunities to learn about DBpedia and extend your Semantic Web and programming skills

We are looking forward to working with you!

You can’t get enough of DBpedia yet? Stay tuned and join us on Facebook, Twitter or subscribe to our newsletter for the latest news!

 

Have a great weekend!

Your

DBpedia Association

DBpedia @ GSoC 2017 – Call for students

DBpedia will participate for the fifth time in the Google Summer of Code program (GSoC), and we are now looking for students who will share their ideas with us. We are regularly growing our community through GSoC and can deliver more and more opportunities to you. We are excited about our new ideas, and we hope you will get excited too!

What is GSoC?

Google Summer of Code is a global program focused on bringing more student developers into open source software development. Funds are given to students (BSc, MSc, PhD) to work for three months on a specific task. First, open source organizations announce their student projects; then students contact the mentor organizations they want to work with and write up a project proposal for the summer. After a selection phase, students are matched with a specific project and a set of mentors to work on the project during the summer.

If you are a GSoC student who wants to apply to our organization, please check our guideline here: http://wiki.dbpedia.org/gsoc2017

Here you can see the Google Summer of Code 2017 timeline:

  • March 20th, 2017 – Student applications open (students can register and submit their applications to mentor organizations)
  • April 3rd, 2017 – Student application deadline
  • May 4th, 2017 – Accepted students are announced and paired with a mentor
  • May 30th, 2017 – Coding officially begins!
  • August 21st, 2017 – Final week: students submit their final work product and their final mentor evaluation
  • September 6th, 2017 – Final results of Google Summer of Code 2017 announced

Check our website for further updates, follow us on Twitter or subscribe to our newsletter.

We are looking forward to your input.

Your DBpedia Association

DBpedia @ GSoC 2017 – Call for ideas & mentors

Dear DBpedians,

As in previous years, we would like your input on DBpedia-related project ideas for GSoC 2017.

For those who are unfamiliar with GSoC (Google Summer of Code), Google pays students (BSc, MSc, PhD) to work for 3 months on an open source project. Open source organizations announce their student projects and students apply for projects they like. After a selection phase, students are matched with a specific project and a set of mentors to work on the project during the summer.

Here you can see the Google Summer of Code 2017 timeline: https://developers.google.com/open-source/gsoc/timeline

For more details, please check: http://wiki.dbpedia.org/gsoc2016

If you have a cool idea for DBpedia, or want to co-mentor an existing cool idea, go here. (All mentors get a free Google T-shirt and the chance to visit Google HQ in November.)

DBpedia applied for the fifth time to participate in the Google Summer of Code program. Here you will find a list of all projects and students from GSoC 2016: http://blog.dbpedia.org/2016/04/26/dbpedia-google-summer-of-code-2016/

Check our website for further updates, follow us on Twitter, or subscribe to our newsletter.

Looking forward to your input.

Your DBpedia Association

Retrospective: 2nd US DBpedia Community meeting in California

After the largest DBpedia meeting to date, we decided it was time to cross the Atlantic a second time for another meetup. Two weeks ago, on October 27th 2016, the 8th DBpedia Community Meeting was held in Sunnyvale, California.

Main Event

Pablo Mendes from Lattice Data Inc. opened the main event with a short introduction setting the tone for the evening. After that Dimitris Kontokostas gave technical and organizational DBpedia updates. The main event attracted attendees with lightning talks from major companies actively using DBpedia or interested in knowledge graphs in general.

Four major institutions described their efforts to organize reusable information in a centralized knowledge representation. Google’s Tatiana Libman presented (on behalf of Denny Vrandečić) the impressive scale of the Google Knowledge graph, with 1B+ entities and over 100 billion facts.

Tatiana Libman from Google

Yahoo’s Nicolas Torzec presented the Yahoo knowledge graph, with a focus on their research on extracting data from Web tables to expand their knowledge, which includes DBpedia as an important part. Qi He from LinkedIn focused mostly on how to model a knowledge graph of people and skills, which becomes particularly interesting with the possibility of integration with Microsoft’s Satori Graph. Such an integration would allow general domain knowledge and very specific knowledge about professionals to complement one another. Stas Malyshev from Wikidata presented statistics on their growth, points of contact with DBpedia, as well as an impressive SPARQL query interface that can be used to query the structured data that they are generating.

Three other speakers focused on the impact of DBpedia in machine learning and natural language processing. Daniel Gruhl from IBM Watson gave the talk “Truth for the impatient”, where he showed that a knowledge model built from DBpedia can help reduce costs and time to value when extracting entity mentions with higher accuracy. Pablo Mendes from Lattice Data Inc. presented their approach that leverages DBpedia and other structured information sources for weak supervision to obtain very strong NLP extractors. Sujan Perera from IBM Watson discussed the problem of identifying implicit mentions of entities in tweets and how the knowledge represented in DBpedia can be used to help uncover those references.

Another three speakers focused on applications of DBpedia and knowledge graphs. Margaret Warren from Metadata Authoring Systems, LLC presented ImageSnippets and how background knowledge from DBpedia allows better multimedia search through inference. For instance, by searching for “birds” you may find pictures that haven’t been explicitly tagged as birds but for which the fact can be inferred from DBpedia. Jans Aasman from Franz Inc presented their company’s approach to Data Exploration with Visual SPARQL Queries. They described opportunities for graph analytics in the medical domain, and discussed how DBpedia has been useful in their applications. Finally, Wang-Chiew Tan presented their research at RIT on building chatbots, among other projects that use background knowledge stored in computers to enrich real-life experiences.

Nicolas Torzec from Yahoo

Overall the talks were very high quality and fostered plenty of discussions afterwards. We finalized the event with a round of introductions where every attendee got to say their name and affiliation to help them connect with one another throughout the final hour of the event.

All slides and presentations are also available on our Website and you will find more feedback and photos about the event on Twitter via #DBpediaCA.

We would like to thank Yahoo for hosting the event, the Google Summer of Code 2016 Mentor Summit, which brought us to the area and allowed us to collocate the DBpedia meeting, the Institute for Applied Informatics for supporting the DBpedia Association, ALIGNED – Software and Data Engineering for funding the development of DBpedia as a project use-case, and last but not least OpenLink Software for continuously hosting the main DBpedia Endpoint.

Thanks to Pablo Mendes for providing one-liner summaries of the talks 🙂

So now, we are looking forward to the next DBpedia community meeting which will be held in Europe again. We will keep you informed via the DBpedia Website and Blog.

Your DBpedia Association

DBpedia @ Google Summer of Code 2016

DBpedia participated for the fourth time in the Google Summer of Code program. This was quite a competitive year (like every year), with more than forty students applying for a DBpedia project. In the end, 8 great students from all around the world were selected and will work on their projects during the summer. Here’s a detailed list of the projects:

A Hybrid Classifier/Rule-based Event Extractor for DBpedia Proposal by Vincent Bohlen

In modern times the amount of information published on the internet is growing to an immeasurable extent. Humans are no longer able to gather all the available information by hand and are more and more dependent on machines collecting relevant information automatically. This is why automatic information extraction, and especially automatic event extraction, is important. In this project I will implement a system for event extraction using classification and rule-based event extraction. The underlying data for both approaches will be identical. I will gather Wikipedia articles and perform a variety of NLP tasks on the extracted texts. First I will annotate the named entities in the text using named entity recognition performed by DBpedia Spotlight. Additionally I will annotate the text with frame semantics using FrameNet frames. I will then use the collected information, i.e. frames, entities, and entity types, with the aforementioned two methods to decide if the collection is an event or not. Mentor: Marco Fossati (SpazioDati)
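The Spotlight annotation step described above returns JSON; below is a minimal sketch of turning such a response into entity tuples a downstream event classifier could consume. The sample document and response are invented for illustration, mirroring Spotlight's "@"-prefixed JSON keys.

```python
def extract_entities(spotlight_response):
    """Return a list of (surface_form, uri, types) from a Spotlight-style response."""
    entities = []
    for res in spotlight_response.get("Resources", []):
        # "@types" is a comma-separated string; drop empty entries.
        types = [t for t in res.get("@types", "").split(",") if t]
        entities.append((res["@surfaceForm"], res["@URI"], types))
    return entities

# Invented sample mimicking a Spotlight /annotate JSON response:
sample = {
    "@text": "Barack Obama visited Berlin.",
    "Resources": [
        {"@URI": "http://dbpedia.org/resource/Barack_Obama",
         "@surfaceForm": "Barack Obama",
         "@types": "DBpedia:Person,DBpedia:Agent"},
        {"@URI": "http://dbpedia.org/resource/Berlin",
         "@surfaceForm": "Berlin",
         "@types": "DBpedia:Place,DBpedia:City"},
    ],
}

for surface, uri, types in extract_entities(sample):
    print(surface, uri, types)
```

In a live pipeline the `sample` dict would come from an HTTP request to the Spotlight annotate endpoint with `Accept: application/json`.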

Automatic mappings extraction by Aditya Nambiar

DBpedia currently maintains mappings from Wikipedia infobox properties to the DBpedia ontology, since several similar templates exist to describe the same types of infoboxes. The aim of the project is to enrich the existing mappings and possibly correct incorrect mappings using Wikidata.

Several Wikipedia pages use Wikidata values directly in their infoboxes. Hence, by using the mapping between Wikidata properties and DBpedia ontology classes, along with the infobox data across several such wiki pages, we can collect many such mappings. The first phase of the project revolves around using various such Wikipedia templates, finding their usages across Wikipedia pages, and extracting as many mappings as possible.

In the second half of the project we use machine learning techniques to take care of any accidental or outlier usage of Wikidata mappings in Wikipedia. At the end of the project we will be able to obtain a correct set of mappings which we can use to enrich the existing ones. Mentor: Markus Freudenberg (AKSW/KILT)
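The filtering of accidental or outlier Wikidata usages described above could be approached, for example, with simple majority voting across pages. A minimal sketch, where the property names and the support threshold are illustrative assumptions rather than the project's actual method:

```python
from collections import Counter, defaultdict

def consolidate_mappings(observations, min_support=0.8):
    """observations: iterable of (wikidata_prop, dbpedia_prop) pairs, one per
    Wikipedia page where the pair was observed together. A candidate mapping is
    accepted only if it accounts for at least min_support of its observations."""
    by_prop = defaultdict(Counter)
    for wd_prop, dbo_prop in observations:
        by_prop[wd_prop][dbo_prop] += 1
    accepted = {}
    for wd_prop, counts in by_prop.items():
        best, best_count = counts.most_common(1)[0]
        if best_count / sum(counts.values()) >= min_support:
            accepted[wd_prop] = best
    return accepted

# Nine pages agree, one page is an accidental outlier:
obs = [("P19", "dbo:birthPlace")] * 9 + [("P19", "dbo:deathPlace")]
print(consolidate_mappings(obs))  # → {'P19': 'dbo:birthPlace'}
```

A learned model, as the proposal suggests, could replace the fixed threshold with features such as page quality or template context.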

Combining DBpedia and Topic Modelling by wojtuch

DBpedia, a crowd- and open-sourced community project extracting content from Wikipedia, stores this information in a huge RDF graph. DBpedia Spotlight is a tool that identifies the DBpedia resources mentioned in a document.

Using DBpedia Spotlight to extract named entities from Wikipedia articles and then applying a topic modelling algorithm (e.g. LDA) with URIs of DBpedia resources as features would result in a model capable of describing documents by the proportions of the topics covering them. And because the topics are themselves represented by DBpedia URIs, this approach could yield a novel RDF hierarchy and ontology, with insights for further analysis of the emerging subgraphs.

The direct implication and first application scenario for this project would be utilizing the inference engine in DBpedia Spotlight, as an additional step after the document has been annotated and predicting its topic coverage. Mentor: Alexandru Todor (FU Berlin)
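The feature-construction step above — representing each document as a bag of DBpedia resource URIs for an LDA-style model — can be sketched as follows. The documents and URIs are invented, and the sparse output format follows the (term index, count) convention that gensim-style LDA implementations expect:

```python
from collections import Counter

def to_bow(docs):
    """Map each document (a list of DBpedia URIs produced by Spotlight) to a
    sparse bag-of-URIs vector over a shared vocabulary."""
    vocab = sorted({uri for doc in docs for uri in doc})
    index = {uri: i for i, uri in enumerate(vocab)}
    # One row per document: sorted (term_index, count) pairs.
    corpus = [sorted((index[u], c) for u, c in Counter(doc).items())
              for doc in docs]
    return vocab, corpus

docs = [
    ["dbr:Berlin", "dbr:Germany", "dbr:Berlin"],
    ["dbr:LDA", "dbr:Topic_model"],
]
vocab, corpus = to_bow(docs)
print(vocab)   # → ['dbr:Berlin', 'dbr:Germany', 'dbr:LDA', 'dbr:Topic_model']
print(corpus)  # → [[(0, 2), (1, 1)], [(2, 1), (3, 1)]]
```

The `corpus` rows would then be fed to a topic model; the resulting per-topic URI distributions are what could be lifted back into RDF.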

DBpedia Lookup Improvements by Kunal.Jha

DBpedia is one of the most extensive and most widely used knowledge bases, available in over 125 languages. DBpedia Lookup is a web service that allows users to obtain various DBpedia URIs for a given label (keywords/anchor texts). The service provides two different types of search APIs, namely Keyword Search and Prefix Search. The lookup service currently returns query results in XML (default) and JSON formats and works on the English language. It is based on a Lucene index providing a weighted label lookup, which combines string similarity with a relevance ranking in order to find the most relevant matches for a given label. As part of GSoC 2016, I propose to implement improvements intended to make the system more efficient and versatile. Mentor: Axel Ngonga (AKSW)
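The Keyword Search API mentioned above can be sketched as follows. The endpoint path is the historical public one, and the JSON result shape shown is illustrative rather than captured from a live response:

```python
from urllib.parse import urlencode

LOOKUP = "http://lookup.dbpedia.org/api/search/KeywordSearch"

def keyword_search_url(query, max_hits=5):
    """Build a KeywordSearch request URL. Sending it with the header
    'Accept: application/json' yields JSON instead of the default XML."""
    return LOOKUP + "?" + urlencode({"QueryString": query, "MaxHits": max_hits})

def best_uri(response):
    """Pick the URI of the first (highest-ranked) result, if any."""
    results = response.get("results", [])
    return results[0]["uri"] if results else None

url = keyword_search_url("Berlin")
# Illustrative result shape (not a live response):
sample = {"results": [{"uri": "http://dbpedia.org/resource/Berlin",
                       "label": "Berlin",
                       "refCount": 50000}]}
print(url)
print(best_uri(sample))  # → http://dbpedia.org/resource/Berlin
```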

Inferring infobox template class mappings from Wikipedia + Wikidata by Peng_Xu

This project aims at finding mappings between the classes in the DBpedia ontology (e.g. dbo:Person, dbo:City) and infobox templates on pages of Wikipedia resources using machine learning. Mentor: Nilesh Chakraborty (University of Bonn)

Integrating RML in the DBpedia extraction framework by wmaroy

This project is about integrating RML in the DBpedia extraction framework. DBpedia is derived from Wikipedia infoboxes using the extraction framework and mappings defined in wikitext syntax. A next step is replacing the wikitext-defined mappings with RML. To accomplish this, adjustments will have to be made to the extraction framework. Mentor: Dimitris Kontokostas (AKSW/KILT)

The List Extractor by FedBai

The project focuses on the extraction of relevant but hidden data which lies inside lists in Wikipedia pages. The information is unstructured and thus cannot easily be used to form semantic statements and be integrated into the DBpedia ontology. Hence, the main task consists in creating a tool which can take one or more Wikipedia pages containing lists as input and then construct appropriate mappings to be inserted into a DBpedia dataset. The extractor must prove to work well on a given domain and be able to generalize beyond it. Mentor: Marco Fossati (SpazioDati)

The Table Extractor by s.papalini

Wikipedia is full of data hidden in tables. The aim of this project is to explore the possibility of taking advantage of all the data represented as tables in Wiki pages, in order to populate the different versions of DBpedia with new data of interest. The Table Extractor is to be the engine of this data “revolution”: it will serve the ultimate purpose of extracting the semi-structured data from all those tables now scattered across most Wiki pages. Mentor: Marco Fossati (SpazioDati)

At the beginning of September 2016 you will receive news about the successful Google Summer of Code 2016 student projects. Stay tuned and follow us on Facebook and Twitter, or visit our website for the latest news.

 
Your DBpedia Association

We proudly present our new 2015-10 DBpedia release, which is available now via http://dbpedia.org/sparql. Go and check it out!

This DBpedia release is based on updated Wikipedia dumps dating from October 2015, featuring a significantly expanded base of information as well as richer and cleaner data based on the DBpedia ontology.
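As a quick illustration, the new release can be queried over the endpoint's standard SPARQL protocol interface. A minimal sketch that hand-builds the request URL; in practice a client library such as SPARQLWrapper would typically be used, and the example query is just one possibility:

```python
from urllib.parse import urlencode

ENDPOINT = "http://dbpedia.org/sparql"

# Example query: fetch the English abstract of the Berlin resource.
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

def sparql_request_url(query):
    """Build a GET request URL per the SPARQL protocol; the 'format'
    parameter asks Virtuoso for SPARQL JSON results."""
    return ENDPOINT + "?" + urlencode(
        {"query": query, "format": "application/sparql-results+json"})

url = sparql_request_url(QUERY)
print(url)
```

Fetching `url` with any HTTP client returns the result bindings as JSON.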

So, what did we do?

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2015-10 ontology encompasses

  • 739 classes (DBpedia 2015-04: 735)
  • 1,099 properties with reference values (a/k/a object properties) (DBpedia 2015-04: 1,098)
  • 1,596 properties with typed literal values (a/k/a datatype properties) (DBpedia 2015-04: 1,583)
  • 132 specialized datatype properties (DBpedia 2015-04: 132)
  • 407 owl:equivalentClass and 222 owl:equivalentProperty mappings to external vocabularies (DBpedia 2015-04: 408 and 200, respectively)

The editors community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2015-10 extraction, we used a total of 5553 template mappings (DBpedia 2015-04: 4317 mappings). For the first time the top language, gauged by number of mappings, is Dutch (606 mappings), surpassing the English community (600 mappings).

And what are the (breaking) changes ?

  • English DBpedia switched from URIs to IRIs.
  • The instance-types dataset is now split into two files:
    • “instance-types” contains only direct types.
    • “instance-types-transitive” contains transitive types.
  • The “mappingbased-properties” file is now split into three files:
    • “geo-coordinates-mappingbased”
    • “mappingbased-literals” contains mapping-based statements with literal values.
    • “mappingbased-objects”
  • We added a new extractor for citation data.
  • All datasets are available in .ttl and .tql serialization 
  • We are providing DBpedia as a Docker image.
  • From now on, we provide extensive dataset metadata by adding DataIDs for all extracted languages to the respective language directories.
  • In addition, we revamped the dataset table on the download page. It is created dynamically based on the DataID of all languages. Likewise, the tables on the statistics page are now based on files providing information about all mapping languages.
  • From now on, we also include the original Wikipedia dump files (‘pages_articles.xml.bz2’) alongside the extracted datasets.
  • A complete changelog can always be found in the git log.
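The switch from URIs to IRIs (first item above) means that resource identifiers can now carry native Unicode characters instead of percent-encoded bytes. A small sketch of the relationship between the two forms, using an illustrative resource name:

```python
from urllib.parse import quote, unquote

# IRI form: native Unicode characters in the resource name.
iri = "http://dbpedia.org/resource/Košice"

# URI form: the same name percent-encoded (UTF-8 bytes).
uri = "http://dbpedia.org/resource/" + quote("Košice")

print(uri)  # → http://dbpedia.org/resource/Ko%C5%A1ice

# Decoding the URI form recovers the IRI form: both identify the same resource.
assert unquote(uri) == iri
```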

And what about the numbers?

Altogether the new DBpedia 2015-10 release consists of 8.8 billion (2015-04: 6.9 billion) pieces of information (RDF triples), out of which 1.1 billion (2015-04: 737 million) were extracted from the English edition of Wikipedia, 4.4 billion (2015-04: 3.8 billion) were extracted from other language editions, and 3.2 billion (2015-04: 2.4 billion) came from DBpedia Commons and Wikidata. In general we observed significant growth in raw infobox and mapping-based statements of close to 10%. Thorough statistics are available via the Statistics page.

And what’s up next?

We will be working to move away from the mappings wiki but we will have at least one more mapping sprint. Moreover, we have some cool ideas for GSOC this year. Additional mentors are more than welcome. 🙂

And who is to blame for the new release?

We want to thank all editors that contributed to the DBpedia ontology mappings via the Mappings Wiki, all the GSoC students and mentors working directly or indirectly on the DBpedia release and the whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.

Special thanks go to Markus Freudenberg and Dimitris Kontokostas (University of Leipzig), Volha Bryl (University of Mannheim / Springer), Heiko Paulheim (University of Mannheim), Václav Zeman and the whole LHD team (University of Prague), Marco Fossati (FBK), Alan Meehan (TCD), Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy), Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software), OpenLink Software (http://www.openlinksw.com/), Ruben Verborgh from Ghent University – iMinds, Ali Ismayilov (University of Bonn), Vladimir Alexiev (Ontotext), and members of the DBpedia Association, the AKSW and the department for Business Information Systems of the University of Leipzig for their commitment in putting tremendous time and effort into getting this done.

The work on the DBpedia 2015-10 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering  (http://aligned-project.eu/).

 

Detailed information about the new release are available here. For more information about DBpedia, please visit our website or follow us on Facebook!

Have fun and all the best!

Yours

DBpedia Association