Category Archives: Support

DBpedia @ GSoC 2017 – Call for ideas & mentors

Dear DBpedians,

As in previous years, we would like your input on DBpedia-related project ideas for GSoC 2017.

For those who are unfamiliar with GSoC (Google Summer of Code), Google pays students (BSc, MSc, PhD) to work for 3 months on an open source project. Open source organizations announce their student projects and students apply for projects they like. After a selection phase, students are matched with a specific project and a set of mentors to work on the project during the summer.

Here you can see the Google Summer of Code 2017 timeline: https://developers.google.com/open-source/gsoc/timeline

You can also check last year's page for examples: http://wiki.dbpedia.org/gsoc2016

If you have a cool idea for DBpedia or want to co-mentor an existing idea, go here. (All mentors get a free Google T-shirt and the chance to visit Google HQ in November.)

DBpedia applied for the fifth time to participate in the Google Summer of Code program. Here you will find a list of all projects and students from GSoC 2016: http://blog.dbpedia.org/2016/04/26/dbpedia-google-summer-of-code-2016/

Check our website for further updates, follow us on Twitter or subscribe to our newsletter.

Looking forward to your input.

Your DBpedia Association

DBpedia in Dutch: formalizing the chapter by signing the Memorandum of Understanding

The DBpedia community and members from over 20 countries work hard to localize and internationalize DBpedia, to support the extraction of non-English Wikipedia editions, and to build data communities around particular languages, regions or special interests. The chapters are part of DBpedia's executive structure and have taken on the responsibility of contributing to DBpedia's infrastructure.

We proudly announce that DBpedia in Dutch is the first chapter to sign the Memorandum of Understanding (MoU). They signed the MoU for several reasons: first, they support the goals of the DBpedia Association; second, it strengthens their own chapter and community of contributors; and third, it improves cooperation with the Dutch research infrastructure and the Dutch Digital Heritage. The cooperation was initiated by the Koninklijke Bibliotheek (National Library of the Netherlands) and Huygens ING (research institute for History and Culture).

Dr. E.J.B. Lily Knibbeler (director of KB) and Prof. Dr. Lex Heerma van Voss (director of Huygens ING) signing the MoU on 12th September 2016 in The Hague.

Other partners, such as imec/Ghent University and the Institute for Sound and Vision, have signed as well and become executive partners of the DBpedia Association. The Vrije Universiteit will join soon. It is a cooperation between these Dutch organizations and the NL-DBpedia community.

The Dutch Chapter has provided a Sample DBpedia Chapter Memorandum of Understanding (MoU) to use as a template for further chapters. If you use DBpedia and want us to keep going forward, we kindly invite you to donate and help DBpedia to grow. If you would like to become a member of the DBpedia Association, please go directly to the application form or contact us.

Check our website for further updates, stay tuned and follow us on Twitter.

Your DBpedia Association

YEAH! We did it again ;) – New 2016-04 DBpedia release

Hereby we announce the release of DBpedia 2016-04. The new release is based on updated Wikipedia dumps dating from March/April 2016 featuring a significantly expanded base of information as well as richer and (hopefully) cleaner data based on the DBpedia ontology.

You can download the new DBpedia datasets in a variety of RDF-document formats from: http://wiki.dbpedia.org/downloads-2016-04 or directly here: http://downloads.dbpedia.org/2016-04/

Support DBpedia

During the latest DBpedia meeting in Leipzig we discussed ways to support DBpedia and the benefits this support would bring. For the next two months, we aim to raise money to support the hosting of the main services and the next DBpedia release (especially to shorten release intervals). On top of that, we need to buy a new server to host DBpedia Spotlight, which has so far been generously hosted by third parties. If you use DBpedia and want us to keep going forward, we kindly invite you to donate here or become a member of the DBpedia Association.

Statistics

The English version of the DBpedia knowledge base currently describes 6.0M entities of which 4.6M have abstracts, 1.53M have geo coordinates and 1.6M depictions. In total, 5.2M resources are classified in a consistent ontology, consisting of 1.5M persons, 810K places (including 505K populated places), 490K works (including 135K music albums, 106K films and 20K video games), 275K organizations (including 67K companies and 53K educational institutions), 301K species and 5K diseases. The total number of resources in English DBpedia is 16.9M that, besides the 6.0M resources, includes 1.7M skos concepts (categories), 7.3M redirect pages, 260K disambiguation pages and 1.7M intermediate nodes.

Altogether the DBpedia 2016-04 release consists of 9.5 billion (2015-10: 8.8 billion) pieces of information (RDF triples), out of which 1.3 billion (2015-10: 1.1 billion) were extracted from the English edition of Wikipedia, 5.0 billion (2015-10: 4.4 billion) from other language editions, and 3.2 billion (2015-10: 3.2 billion) from DBpedia Commons and Wikidata. In general, we observed a growth in mapping-based statements of about 2%.

Thorough statistics can be found on the DBpedia website and general information on the DBpedia datasets here.
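To give a feel for how such numbers can be reproduced, the sketch below builds a request URL for the public SPARQL endpoint at http://dbpedia.org/sparql. The helper is purely illustrative: it only constructs the URL and does not contact the service, and endpoint availability is an assumption.

```python
from urllib.parse import urlencode

def sparql_url(query, endpoint="http://dbpedia.org/sparql"):
    # Build a GET request URL for a SPARQL endpoint, asking for
    # JSON results. Executing it requires network access.
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urlencode(params)

# Count resources typed as dbo:Person (the release reports ~1.5M).
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT (COUNT(?s) AS ?n) WHERE { ?s a dbo:Person }
"""

url = sparql_url(query)
```

Fetching `url` with any HTTP client then returns the count in SPARQL JSON results format.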

Community

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2016-04 ontology encompasses:

  • 754 classes (DBpedia 2015-10: 739)
  • 1,103 object properties (DBpedia 2015-10: 1,099)
  • 1,608 datatype properties (DBpedia 2015-10: 1,596)
  • 132 specialized datatype properties (DBpedia 2015-10: 132)
  • 410 owl:equivalentClass and 221 owl:equivalentProperty mappings to external vocabularies (DBpedia 2015-10: 407 – 221)

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2016-04 extraction, we used a total of 5800 template mappings (DBpedia 2015-10: 5553 mappings). For the second time the top language, gauged by the number of mappings, is Dutch (646 mappings), followed by the English community (604 mappings).

(Breaking) Changes

  • In addition to datasets normalized to English DBpedia URIs (en-uris), we now also provide datasets normalized to DBpedia Wikidata URIs (wkd-uris), based on the DBpedia Wikidata (DBw) datasets. These sorted datasets will be the foundation for the upcoming fusion process with Wikidata. From the following releases on, the DBw-based URIs will be the only ones provided.
  • We now filter out triples from the Raw Infobox Extractor that are already mapped. E.g. no more “<x> dbo:birthPlace <z>” and “<x> dbp:birthPlace|dbp:placeOfBirth|… <z>” in the same resource. These triples are now moved to the “infobox-properties-mapped” datasets and not loaded on the main endpoint. See issue 22 for more details.
  • Major improvements in our citation extraction. See here for more details.
  • We incorporated the statistical distribution approach of Heiko Paulheim in creating type statements automatically and providing them as an additional datasets (instance_types_sdtyped_dbo).
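The raw-infobox filtering described above can be sketched as follows. This is an illustrative toy, not the actual extraction-framework code; the resource and property names are examples only.

```python
def split_raw_triples(mapped, raw):
    """Divert raw dbp: triples whose (subject, object) pair is already
    covered by a mapped dbo: triple, as in the release's
    "infobox-properties-mapped" dataset.

    mapped, raw: lists of (subject, predicate, object) triples."""
    covered = {(s, o) for s, _, o in mapped}
    kept, diverted = [], []
    for s, p, o in raw:
        (diverted if (s, o) in covered else kept).append((s, p, o))
    return kept, diverted

mapped = [("dbr:Goethe", "dbo:birthPlace", "dbr:Frankfurt")]
raw = [("dbr:Goethe", "dbp:placeOfBirth", "dbr:Frankfurt"),
       ("dbr:Goethe", "dbp:occupation", '"poet"')]
kept, diverted = split_raw_triples(mapped, raw)
# kept holds only the occupation triple; the duplicated birth-place
# fact is diverted to the mapped-properties dataset.
```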

In case you missed it, what we changed in the previous release (2015-10):

  • English DBpedia switched to IRIs. This can be a breaking change to some applications that need to change their stored DBpedia resource URIs / links. We provide the “uri-same-as-iri” dataset for English to ease the transition.
  • The instance-types dataset is now split into two files: instance-types (containing only direct types) and instance-types-transitive containing the transitive types of a resource based on the DBpedia ontology
  • The mappingbased-properties file is now split into three files:
    • “geo-coordinates-mappingbased”, which contains the coordinates originating from the mappings wiki; the “geo-coordinates” dataset continues to provide the coordinates originating from the GeoExtractor
    • “mappingbased-literals”, which contains mapping-based facts with literal values
    • “mappingbased-objects”, which contains mapping-based facts with object values
    • the “mappingbased-objects-disjoint-[domain|range]” datasets, which contain facts filtered out of “mappingbased-objects” as errors but still provided
  • We added a new extractor for citation data that provides two files:
    • citation links: linking resources to citations
    • citation data: trying to get additional data from citations. This is quite an interesting dataset, but we need help to clean it up
  • All datasets are available in .ttl and .tql serialization (nt, nq dataset were neglected for reasons of redundancy and server capacity).
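Since the datasets ship as .ttl and .tql, a quick look at the .tql (N-Quads) serialization can even be taken without an RDF library. The sketch below handles only simple lines (IRI terms and plain literals without escaped quotes); real processing should use a proper RDF parser.

```python
import re

# One N-Quads line: subject IRI, predicate IRI, object (IRI or
# literal with optional language tag), graph IRI, trailing dot.
QUAD = re.compile(
    r'<([^>]*)>\s+<([^>]*)>\s+(<[^>]*>|"[^"]*"(?:@\w+)?)\s+<([^>]*)>\s*\.')

def parse_quad(line):
    m = QUAD.match(line.strip())
    if not m:
        raise ValueError("unsupported line: " + line)
    s, p, o, g = m.groups()
    return s, p, o.strip("<>"), g  # strip brackets from IRI objects

line = ('<http://dbpedia.org/resource/Leipzig> '
        '<http://www.w3.org/2000/01/rdf-schema#label> '
        '"Leipzig"@en '
        '<http://en.wikipedia.org/wiki/Leipzig?oldid=1> .')
s, p, o, g = parse_quad(line)
```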

Upcoming Changes

  • Dataset normalization: We are going to normalize datasets based on Wikidata URIs and no longer on the English language edition, as a prerequisite to finally starting the fusion process with Wikidata.
  • RML Integration: Wouter Maroy has already provided the necessary groundwork for switching the mappings wiki to an RML-based approach on GitHub. We are not there yet, but this is at the top of our list of changes.
  • Starting with the next release we are adding datasets with NIF annotations of the abstracts (as we already provided those for the 2015-04 release). We will eventually extend the NIF annotation dataset to cover the whole Wikipedia article of a resource.

New Datasets

  • SDTypes: We extended the coverage of the automatically created type statements (instance_types_sdtyped_dbo) to English, German and Dutch (see above).
  • Extensions: In the extension folder (2016-04/ext) we provide two new datasets, both are to be considered in an experimental state:
    • DBpedia World Facts: This dataset is authored by the DBpedia Association itself. It lists all countries, all currencies in use and (most) languages spoken in the world, as well as how these concepts relate to each other (spoken in, primary language etc.) and useful properties like ISO codes (ontology diagram). This dataset extends the very useful LEXVO dataset with facts from DBpedia and the CIA Factbook. Please report any errors or suggestions regarding this dataset to Markus.
    • Lector Facts: This experimental dataset was provided by Matteo Cannaviccio and demonstrates his approach to generating facts by using common sequences of words (i.e. phrases) that are frequently used to describe instances of binary relations in a text. We are looking into using this approach as a regular extraction step. It would be helpful to get some feedback from you.

Credits

Lots of thanks to

  • Markus Freudenberg (University of Leipzig / DBpedia Association) for taking over the whole release process and creating the revamped download & statistics pages.
  • Dimitris Kontokostas (University of Leipzig / DBpedia Association) for conveying his considerable knowledge of the extraction and release process.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
  • Heiko Paulheim (University of Mannheim) for providing the necessary code for his algorithm to generate additional type statements for formerly untyped resources and to identify and remove wrong statements, which is now part of the DIEF.
  • Václav Zeman, Thomas Klieger and the whole LHD team (University of Prague) for their contribution of additional DBpedia types
  • Marco Fossati (FBK) for contributing the DBTax types
  • Alan Meehan (TCD) for performing a big external link cleanup
  • Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
  • Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that provides 5-Star Linked Open Data publication and SPARQL Query Services.
  • OpenLink Software (http://www.openlinksw.com/) collectively for providing the SPARQL Query Services and Linked Open Data publishing  infrastructure for DBpedia in addition to their continuous infrastructure support.
  • Ruben Verborgh from Ghent University – iMinds for publishing the dataset as Triple Pattern Fragments, and iMinds for sponsoring DBpedia’s Triple Pattern Fragments server.
  • Ali Ismayilov (University of Bonn) for extending the DBpedia Wikidata dataset.
  • Vladimir Alexiev (Ontotext) for leading a successful mapping and ontology clean up effort.
  • All the GSoC students and mentors who directly or indirectly influenced the DBpedia release
  • Special thanks to members of the DBpedia Association, the AKSW and the department for Business Information Systems of the University of Leipzig.

The work on the DBpedia 2016-04 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering (http://aligned-project.eu/). More information about DBpedia can be found at http://dbpedia.org as well as in the new overview article about the project, available at http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2016-04 release!

For more information about DBpedia, please visit our website or follow us on facebook!
Your DBpedia Association

More than 150 DBpedia enthusiasts @ the 7th Community Meeting in Leipzig

After the success of the last two community meetings in Palo Alto and The Hague, we thought it was time to meet in Leipzig, where the DBpedia Association is located. During SEMANTiCS 2016 in Leipzig (Sep 12-15), the DBpedia community met on the 15th of September. First and foremost, we would like to thank the Institute for Applied Informatics for supporting our community, the University of Leipzig for hosting our meeting, and SEMANTiCS for hosting and sponsoring the meeting.

Opening Session

DBpedia Team together with Lydia Pintscher
DBpedia Team and Lydia Pintscher (Wikidata)

During the opening session, Lydia Pintscher, product manager of Wikidata, presented “Wikidata: bringing structured data to Wikipedia with 16,000 volunteers”. Lydia described similarities and differences between DBpedia and Wikidata and talked about prospective next steps for Wikidata. Harald Sack from the Hasso-Plattner-Institut also spoke during the opening session. He introduced the dwerft project – DBpedia and Linked Data for the Media Value Chain – which aims at the common technology platform »Linked Production Data Cloud«.

Showcase Session

The DBpedia showcase session started with the DBpedia 2016-04 release update by Markus Freudenberg (AKSW/KILT). At this session, six speakers presented how to utilize DBpedia in novel and interesting ways. For example:

  • Miel Vander Sande (iMinds) talked about DBpedia Archives as Memento with Triple Pattern Fragments.
  • Jörn Hees (DFKI) introduced us to Human associations in the Semantic Web and DBpedia.
  • Peter de Laat from GoUnitive urged the community to personalize user interaction in a Linked Data environment.

DBpedia Association hour

The 7th edition of the community meeting featured the first DBpedia Association hour, which provided a platform for the community to discuss and give feedback. Sebastian Hellmann (AKSW, KILT), Julia Holze (DBpedia Association) and Dimitris Kontokostas (AKSW, KILT) gave an update on the status of the DBpedia Association. We talked about our technical progress, DBpedia funding and visions. Sebastian Hellmann introduced the Board of Trustees, the main decision-making body of the DBpedia Association, which oversees the association and its work as its ultimate corporate authority.

Enno Meijers (KB) of the Dutch DBpedia chapter announced a successful cooperation between Huygens ING, iMinds/Univ. Gent, Vrije Universiteit Amsterdam, the Institute for Sound and Vision, Koninklijke Bibliotheek (KB) and the NL-DBpedia community. By signing the Memorandum of Understanding (MoU) they officially support the goals of the DBpedia Association and strengthen the Dutch chapter and community.

You will find community feedback and all questions which we discussed at the first DBpedia Association hour here: https://pad.okfn.org/p/how-to-improve-DBpedia. Participants who wanted to learn DBpedia basics joined the DBpedia tutorial session by Markus Freudenberg (AKSW/KILT).

Afternoon Track

The sessions in the afternoon highlighted two important fields of research and development: the DBpedia ontology and DBpedia & NLP. In the DBpedia ontology session, Wouter Maroy (iMinds) presented the DBpedia RML mappings he created during this year’s Google Summer of Code project, and Gerard Kuys (Ordina) discussed the question ‘Does extraction prelude structure?’ with the DBpedia ontology group. At the same time, Milan Dojchinovski (AKSW/KILT) chaired the DBpedia & NLP session with eight very interesting talks. You will find all presentations given during this session on our website. The last two presentations, Analyzing and Improving the Polish Wikipedia Citations (part of the Wikipedia References & Citations challenge) and Greek DBpedia Updates, were given by Krzysztof Węcel (Poznan University) and Sotiris Karampatakis (OKF Greece).

At the closing session we wrapped up the meeting and gave out our prizes:

  • The “DBpedia Excellence in Engineering” prize went to Markus Freudenberg for keeping up with the DBpedia releases.
  • The “Citations Challenge” prize went to Krzysztof Węcel for his very thorough citation analysis.

All slides and presentations are also available on our website, and you will find more feedback and photos about the event on Twitter via #DBpediaLeipzig2016.

Summing up, the event brought together more than 150 DBpedians from all over Europe, who engaged in vital conversations about interesting projects and approaches to questions revolving around DBpedia. We would like to thank the organizers Magnus Knuth (HPI, DBpedia German & Commons), Monika Solanki (University of Oxford) and representatives of the DBpedia Association, including Dimitris Kontokostas, Sebastian Hellmann and Julia Holze, for devoting their time to the organization of the meeting and the program.

We are now looking forward to the 8th DBpedia Community Meeting (which will most probably come sooner than you think, across the Atlantic). Check our website for further updates or follow #DBpedia on Twitter.

Your DBpedia Association.

Leipzig is calling for the next DBpedia Community Meeting.

During the SEMANTiCS 2016 in Leipzig, Sep 12-15, the DBpedia community will get together on the 15th of September for the 7th edition of the DBpedia Community Meeting. The meeting will take place at the University of Leipzig (Augustusplatz 10, 04109 Leipzig, Germany). See here for detailed directions.

Over 140 participants have registered for the next DBpedia Community Meeting and only a few seats are left. So come and get your ticket to be part of this event.

The 7th edition of this event features the first DBpedia Association hour, which provides a platform for the community to discuss and give feedback. On top of that we will have a DBpedia showcase session on the DBpedia+ Data Stack 2016-04 release and talks about Human Associations in the Semantic Web and DBpedia, DBpedia Archives as Memento with Triple Pattern Fragments, and Towards a Unified PageRank for DBpedia and Wikidata. Our event also features a dev & tutorial session to learn about DBpedia, as well as a DBpedia ontology session and a DBpedia & NLP session.

Lydia Pintscher, product manager of Wikidata, will speak about Wikidata: bringing structured data to Wikipedia with 16,000 volunteers, and Harald Sack from the Hasso-Plattner-Institut will speak about the dwerft project – DBpedia and Linked Data for the Media Value Chain. At the end of the meeting there will be a session for the “DBpedia references and citations challenge”; submissions will be judged by the Organizing Committee and the best two will receive a prize.

Attending the DBpedia Community Meeting is free, but you need to register here. Optionally, in case you would like to support DBpedia with a little more than your presence during the event, you can choose a DBpedia support ticket. Have a look here:

https://event.gg/3396-7th-dbpedia-community-meeting-in-leipzig-2016
We would like to thank the following organizations for sponsoring and supporting our endeavour.

Check our website for further updates and like us on Facebook.

Your DBpedia Association

DBpedia citations & references challenge

In the latest release (2015-10) DBpedia started exploring the citation and reference data from Wikipedia and we were pleasantly surprised by the rich data we managed to extract.

This data holds huge potential, especially for the Wikidata challenge of providing a reference source for every statement. It describes not only a lot of bibliographical data, but also a lot of web pages and many other sources around the web.

The data we extract at the moment is quite raw and can be improved in many different ways.

We welcome contributions that improve the existing citation dataset in any way, and we are open to collaborating and helping. Results will be presented at the next DBpedia meeting on 15 September 2016 in Leipzig, co-located with SEMANTiCS 2016. Each participant should submit a short description of his/her contribution by Monday, 12 September 2016 and present the work at the meeting. Comments and questions can be posted on the DBpedia discussion & developer lists or on our new DBpedia ideas page.

Submissions will be judged by the Organizing Committee and the best two will receive a prize.

Organizing Committee

Stay tuned and follow us on facebook, twitter or visit our website for the latest news.

Your DBpedia Association

DBpedia @ BIS 2016

DBpedia will be part of the 19th International Conference on Business Information Systems (6-8 July 2016) at the University of Leipzig. The conference addresses a wide scientific community and experts involved in the development of business computing applications. The three-day conference program is a mix of workshops, tutorials and paper sessions. Below you will find more information about the DBpedia tutorial:

When?

Wednesday, July 6th, 2016

What?

DBpedia Tutorial on Semantic Knowledge Integration in established Data (IT) Environments

Enriching data with a semantic layer and linking entities is key to what is loosely called Smart Data. An easy, yet comprehensive way of achieving this is the use of Linked Data standards.

In this DBpedia tutorial, we will introduce

  • the basic ideas of Linked Data and other Semantic Web standards
  • existing open datasets that can be freely reused (including DBpedia of course)
  • software and services in the DBpedia infrastructure such as the DBpedia SPARQL service, the lookup service and the DBpedia Spotlight Entity Linking service
  • common business use cases that will help to put the lessons learned into practice
  • an integration example for a hypothetical environment

In particular, we would like to show how to seamlessly integrate Linked Data technologies into existing IT and data environments and discuss how to link private corporate knowledge graphs to DBpedia and Linked Open Data. Another special focus is on finding links in text and unstructured data.
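As a small taste of the services covered in the tutorial, a DBpedia Spotlight annotation request can be sketched as below. The endpoint URL and parameter names are assumptions based on the public service at the time and may change; check the current Spotlight documentation before relying on them.

```python
from urllib.parse import urlencode

def spotlight_request(text, confidence=0.5,
                      endpoint="http://api.dbpedia-spotlight.org/en/annotate"):
    # Build the annotation request URL. Send it with an
    # "Accept: application/json" header to get JSON back;
    # actually calling it requires network access.
    params = {"text": text, "confidence": confidence}
    return endpoint + "?" + urlencode(params)

url = spotlight_request("Leipzig is a city in Saxony.")
```

The JSON response lists the surface forms found in the text together with the DBpedia URIs they were linked to.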

Duration

2 x 90 minutes (half day)

Target audience

  • Practitioners that would like to learn about linked data and take home the know-how to apply it in their organisation
  • Researchers and students that would like to use linked data in their research

Who?

The tutorial is held by core members of the DBpedia Association and members of the AKSW/KILT research group in the context of three large research projects.

Markus Freudenberg (main contact: freudenberg@informatik.uni-leipzig.de), Markus Ackermann, Kay Müller, Sebastian Hellmann, AKSW/KILT, Leipzig University

Stay tuned and follow us on facebook, twitter or visit our website for the latest news.

Your DBpedia Association

DBpedia @ Google Summer of Code 2016

DBpedia participated for the fourth time in the Google Summer of Code program. This was a quite competitive year (like every year), with more than forty students applying for a DBpedia project. In the end, 8 great students from all around the world were selected and will work on their projects during the summer. Here is a detailed list of the projects:

A Hybrid Classifier/Rule-based Event Extractor for DBpedia Proposal by Vincent Bohlen

In modern times, the amount of information published on the internet is growing immeasurably. Humans are no longer able to gather all the available information by hand and depend more and more on machines to collect relevant information automatically. This is why automatic information extraction, and especially automatic event extraction, is important. In this project I will implement a system for event extraction using classification and rule-based event extraction. The underlying data for both approaches will be identical. I will gather Wikipedia articles and perform a variety of NLP tasks on the extracted texts. First I will annotate the named entities in the text using named entity recognition performed by DBpedia Spotlight. Additionally, I will annotate the text with frame semantics using FrameNet frames. I will then use the collected information (i.e. frames, entities, entity types) with the two aforementioned methods to decide whether the collection is an event or not. Mentor: Marco Fossati (SpazioDati)

Automatic mappings extraction by Aditya Nambiar

DBpedia currently maintains a mapping between Wikipedia infobox properties and the DBpedia ontology, since several similar templates exist to describe the same type of infobox. The aim of the project is to enrich the existing mappings and possibly correct incorrect mappings using Wikidata.

Several Wikipedia pages use Wikidata values directly in their infoboxes. Hence, by using the mapping between Wikidata properties and DBpedia ontology classes, along with the infobox data across several such wiki pages, we can collect many such mappings. The first phase of the project revolves around using various such Wikipedia templates, finding their usages across Wikipedia pages and extracting as many mappings as possible.

In the second half of the project we use machine learning techniques to take care of any accidental/outlier usage of Wikidata mappings in Wikipedia. At the end of the project we will be able to obtain a correct set of mappings with which we can enrich the existing mapping. Mentor: Markus Freudenberg (AKSW/KILT)
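The idea of voting out accidental or outlier usages can be sketched with a simple majority count. This is a toy illustration, not the project's actual code; the property names and support threshold are invented for the example.

```python
from collections import Counter, defaultdict

def infer_mappings(observations, min_support=2):
    """Tally, for each infobox property, which (already mapped)
    Wikidata property carried the same value on the same page,
    then keep the majority vote if it has enough support.

    observations: iterable of (infobox_property, wikidata_property)."""
    votes = defaultdict(Counter)
    for infobox_prop, wd_prop in observations:
        votes[infobox_prop][wd_prop] += 1
    mapping = {}
    for infobox_prop, counter in votes.items():
        wd_prop, count = counter.most_common(1)[0]
        if count >= min_support:  # drop accidental/outlier usages
            mapping[infobox_prop] = wd_prop
    return mapping

obs = [("birth_date", "P569"), ("birth_date", "P569"),
       ("birth_date", "P570"),  # an outlier usage
       ("spouse", "P26")]       # too little support
result = infer_mappings(obs)    # {'birth_date': 'P569'}
```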

Combining DBpedia and Topic Modelling by wojtuch

DBpedia, a crowd-sourced, open community project extracting content from Wikipedia, stores this information in a huge RDF graph. DBpedia Spotlight is a tool which delivers the DBpedia resources that are mentioned in a document.

Using DBpedia Spotlight to extract named entities from Wikipedia articles and then applying a topic modelling algorithm (e.g. LDA) with the URIs of DBpedia resources as features would result in a model capable of describing documents by the proportions of the topics covering them. And because the topics are also represented by DBpedia URIs, this approach could result in a novel RDF hierarchy and ontology, with insights for further analysis of the emerging subgraphs.

The direct implication and first application scenario for this project would be utilizing the inference engine in DBpedia Spotlight as an additional step after a document has been annotated, predicting its topic coverage. Mentor: Alexandru Todor (FU Berlin)

DBpedia Lookup Improvements by Kunal.Jha

DBpedia is one of the most extensive and most widely used knowledge bases, available in over 125 languages. DBpedia Lookup is a web service that allows users to obtain DBpedia URIs for a given label (keywords/anchor texts). The service provides two different search APIs, namely Keyword Search and Prefix Search. The lookup service currently returns query results in XML (default) and JSON formats and works on the English language. It is based on a Lucene index providing a weighted label lookup, which combines string similarity with a relevance ranking in order to find the most relevant matches for a given label. As part of GSoC 2016, I propose to implement improvements with the intention of making the system more efficient and versatile. Mentor: Axel Ngonga (AKSW)
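The weighted label lookup described above can be illustrated with a toy ranker that mixes string similarity with a relevance score. The weighting scheme and candidate data are invented for illustration; the real service relies on a Lucene index rather than this naive scoring.

```python
from difflib import SequenceMatcher

def rank(query, candidates, alpha=0.7):
    """Rank candidates by a weighted mix of string similarity and a
    precomputed relevance score (e.g. normalized inlink count).

    candidates: list of (label, uri, relevance), relevance in [0, 1]."""
    def score(candidate):
        label, _, relevance = candidate
        sim = SequenceMatcher(None, query.lower(), label.lower()).ratio()
        return alpha * sim + (1 - alpha) * relevance
    return sorted(candidates, key=score, reverse=True)

candidates = [
    ("Berlin", "http://dbpedia.org/resource/Berlin", 0.9),
    ("Berlin, New Hampshire",
     "http://dbpedia.org/resource/Berlin,_New_Hampshire", 0.2),
]
best = rank("berlin", candidates)[0]
# The German capital wins: exact label match plus high relevance.
```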

Inferring infobox template class mappings from Wikipedia + Wikidata by Peng_Xu

This project aims at finding mappings between the classes (eg. dbo:Person, dbo:City) in the DBpedia ontology and infobox templates on pages of Wikipedia resources using machine learning. Mentor: Nilesh Chakraborty (University of Bonn)

Integrating RML in the DBpedia extraction framework by wmaroy

This project is about integrating RML into the DBpedia extraction framework. DBpedia is derived from Wikipedia infoboxes using the extraction framework and mappings defined in wikitext syntax. A next step is replacing the wikitext-defined mappings with RML. To accomplish this, adjustments will have to be made to the extraction framework. Mentor: Dimitris Kontokostas (AKSW/KILT)

The List Extractor by FedBai

The project focuses on the extraction of relevant but hidden data which lies inside lists on Wikipedia pages. This information is unstructured and thus cannot easily be used to form semantic statements and be integrated into the DBpedia ontology. Hence, the main task consists in creating a tool which can take one or more Wikipedia pages containing lists as input and construct appropriate mappings to be inserted into a DBpedia dataset. The extractor must prove to work well on a given domain and show the ability to be extended towards generalization. Mentor: Marco Fossati (SpazioDati)

The Table Extractor by s.papalini

Wikipedia is full of data hidden in tables. The aim of this project is to explore the possibilities of taking advantage of all the data represented in tables on wiki pages, in order to populate the different versions of DBpedia with new data of interest. The Table Extractor will be the engine of this data “revolution”: it will extract the semi-structured data from all those tables now scattered across most wiki pages. Mentor: Marco Fossati (SpazioDati)

At the beginning of September 2016 you will receive news about the successful Google Summer of Code 2016 student projects. Stay tuned and follow us on facebook, twitter or visit our website for the latest news.

 
Your DBpedia Association

6th DBpedia Community Meeting in The Hague 2016

3 more days to go…

until we finally meet again for our next DBpedia Community Meeting, which is hosted by the National Library of the Netherlands in The Hague on February 12th. One day before, we will have a welcome reception (5-8pm) with snacks and drinks at TNO – New Babylon.

Only 15 seats are left for the next DBpedia Community Meeting, so come and get your ticket to be part of this event.

The 6th edition of this event covers a discussion about the Dutch DBpedia becoming the first chapter with institutional support of the new DBpedia, as well as a session on the DBpedia ontology by members of the newly founded DBpedia working group. On top of that we will have a DBpedia showcase session on the DBpedia+ Data Stack 2015-10 release and quality control in DBpedia, as well as presentations about the LIDER and Goose projects. And as usual, our event features a dev and tutorial session to learn about DBpedia.

Experts in the field of semantic technologies from Elsevier and the Dutch Land Registry and Mapping Agency, as well as the Europeana project and the DEN foundation, will speak about topics such as Digital Heritage in the Netherlands and Knowledge Graph Construction and the Role of DBpedia.

Attending the DBpedia Community Meeting is free, but you need to register here. Optionally, if you would like to support DBpedia with a little more than your presence during the event, you can choose a DBpedia support ticket. Have a look here:

https://event.gg/2245-6th-dbpedia-meeting-in-the-hague-2016

We would like to thank the following organizations for sponsoring and supporting our endeavour.

Check our website for further updates and like us on Facebook.

GSoC 2015 is gone, long live GSoC 2016

The submission deadline for mentoring organizations to apply for the 2016 Google Summer of Code is approaching quickly. As DBpedia is again planning to be a vital part of the Mentoring Summit, we would like to take this opportunity to give you a little recap of the projects mentored by DBpedia members during the past GSoC, in November 2015.

Dimitris Kontokostas, Marco Fossati, Thiago Galery, Joachim Daiber and Ruben Verborgh, members of the DBpedia community, mentored 8 great students from around the world. Following are some of the projects they completed.

Fact Extraction from Wikipedia Text by Emilio Dorigatti

DBpedia is fairly mature when dealing with Wikipedia's semi-structured content such as infoboxes, links and categories. However, unstructured content (typically text) plays the most crucial role, due to the amount of knowledge it can deliver, and few efforts have been made to extract structured data from it. Marco and Emilio built a fact extractor, which understands the semantics of a sentence thanks to Natural Language Processing (NLP) techniques. If you feel playful, you can download the produced datasets. For more details, check out this blog post. P.S.: the project has been cited by Python Weekly and Python Trending. Mentor: Marco Fossati (SpazioDati)

Better context vectors for disambiguation by Philipp Dowling

Better Context Vectors aimed to improve the representation of context used by DBpedia Spotlight by incorporating novel methods from distributional semantics. We investigated the benefits of replacing a word-count-based method with one that uses a model based on word2vec. Our student, Philipp Dowling, implemented the model reader based on a preprocessed version of Wikipedia (leading to a few commits to the awesome gensim library) and the integration with the main DBpedia Spotlight pipeline. Additionally, we integrated a method for estimating weights for the different model components that contribute to disambiguating entities. Mentors: Thiago Galery (Analytyca), Joachim Daiber (Amsterdam Univ.), David Przybilla (Idio)
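The core idea can be sketched in a few lines: represent a mention's context as the mean of its words' embeddings, then pick the candidate entity whose vector is most similar. This is only a toy illustration of the approach, not Spotlight's actual pipeline; the two-dimensional vectors and entity names below are made-up stand-ins for real word2vec output.

```python
import numpy as np

# Made-up 2-d "embeddings" standing in for word2vec vectors
EMB = {
    "river": np.array([0.8, 0.3]),
    "water": np.array([0.7, 0.4]),
    "money": np.array([0.1, 0.9]),
    "loan":  np.array([0.2, 0.8]),
}
# Hypothetical candidate-entity vectors for the ambiguous mention "bank"
ENTITY = {
    "Bank_(geography)": np.array([0.8, 0.2]),
    "Bank_(finance)":   np.array([0.1, 0.9]),
}

def context_vector(words):
    """Mean of the embeddings of known context words."""
    vecs = [EMB[w] for w in words if w in EMB]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_words):
    """Return the candidate entity closest to the context vector."""
    ctx = context_vector(context_words)
    return max(ENTITY, key=lambda e: cosine(ctx, ENTITY[e]))

print(disambiguate(["river", "water"]))  # → Bank_(geography)
print(disambiguate(["money", "loan"]))   # → Bank_(finance)
```

A word-count baseline would instead compare sparse bag-of-words vectors; the dense-embedding version generalizes to context words never seen alongside the entity during training, which is the benefit the project investigated.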


Wikipedia Stats Extractor by Naveen Madhire

Wikipedia Stats Extractor aimed to create a reusable tool to extract raw statistics for Named Entity Linking out of a Wikipedia dump. Naveen built the project on top of Apache Spark and Json-wikipedia, which makes the code more maintainable and faster than its previous alternative (pignlproc). Furthermore, Wikipedia Stats Extractor provides an interface which makes it easier to process Wikipedia dumps for purposes other than Entity Linking. Extra changes were made in the way surface-form stats are extracted and lots of noise was removed, both of which should in principle help Entity Linking.
Special regards to Diego Ceccarelli, who gave us great insight into how Json-wikipedia works. Mentors: Thiago Galery (Analytyca), Joachim Daiber (Amsterdam Univ.), David Przybilla (Idio)


DBpedia Live extensions by Andre Pereira

DBpedia Live provides near real-time knowledge extraction from Wikipedia. As Wikipedia scales, we needed to move our caching infrastructure from MySQL to MongoDB. This was the first task of Andre's project. The second task was the implementation of a UI displaying the current status of DBpedia Live, along with some admin utilities. Mentors: Dimitris Kontokostas (AKSW/KILT), Magnus Knuth (HPI)


Adding live-ness to the Triple Pattern Fragments server by Pablo Estrada

DBpedia currently has a highly available Triple Pattern Fragments interface that offloads part of the query processing from the server onto the clients. For this GSoC, Pablo developed a new feature for this server so that it automatically keeps itself up to date with new data coming from DBpedia Live. It does this by periodically checking for updates and adding them to an auxiliary database. Pablo developed smart update and smart querying algorithms to manage and serve the live data efficiently. We are excited to let the project out into the wild and see how it performs in real-life use cases. Mentors: Ruben Verborgh (Ghent Univ. – iMinds) and Dimitris Kontokostas (AKSW/KILT)
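The update strategy described above can be sketched roughly as follows: the static dataset stays untouched while a small auxiliary store absorbs additions and deletions from live changesets, and triple-pattern lookups consult the combined view. This is an assumption-laden toy model, not Pablo's actual implementation; the class, method names and changeset format are invented for illustration.

```python
# Toy model of a Triple Pattern Fragments backend augmented with a
# live auxiliary store. Triples are plain (subject, predicate, object)
# tuples; a real server would use an indexed store such as HDT.

class LiveAugmentedStore:
    def __init__(self, static_triples):
        self.static = set(static_triples)
        self.added = set()    # auxiliary store: live additions
        self.removed = set()  # live deletions shadowing static data

    def apply_changeset(self, additions, deletions):
        """Fold one polled DBpedia Live changeset into the aux store."""
        for t in additions:
            self.removed.discard(t)
            if t not in self.static:
                self.added.add(t)
        for t in deletions:
            self.added.discard(t)
            if t in self.static:
                self.removed.add(t)

    def match(self, s=None, p=None, o=None):
        """Triple-pattern lookup over the static + live view."""
        def fits(t):
            return all(v is None or v == x for v, x in zip((s, p, o), t))
        live_view = (self.static - self.removed) | self.added
        return {t for t in live_view if fits(t)}

store = LiveAugmentedStore({("Berlin", "population", "3400000")})
# One polling cycle delivers a changeset updating Berlin's population
store.apply_changeset(
    additions=[("Berlin", "population", "3700000")],
    deletions=[("Berlin", "population", "3400000")],
)
print(store.match(s="Berlin"))
```

Keeping live edits in a separate small structure is what makes the approach attractive: the large static dataset never needs rewriting between full releases, and the auxiliary store stays small enough to query cheaply alongside it.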

Registration for mentors @ GSoC 2016 starts next month and DBpedia will of course try to participate again. If you want to become a mentor or just have a cool idea that seems suitable, don't hesitate to ping us via the DBpedia discussion or developer mailing lists.

Stay tuned!

Your DBpedia Association