Tag Archives: DBpedia

GlobalFactSync and WikidataCon 2019

We will be spending the next three days in Berlin at WikidataCon 2019, the conference for open data enthusiasts. From October 24th till 26th, we will be presenting the latest developments and first results of our work in the GlobalFactSyncRE project.

Short Project Intro

Funded by the Wikimedia Foundation, the project started in June 2019 and has two goals:

  • Answer the following questions:
    • How is data edited in Wikipedia and Wikidata?
    • Where does it come from?
    • How can we synchronize it globally?
  • Build an information system to synchronize facts between all Wikipedia language editions, Wikidata, DBpedia, and eventually multiple external sources, while also providing the respective references. 

In order to help Wikipedians maintain their infoboxes, check facts, and improve data in Wikidata, we use data from Wikipedia infoboxes of different languages, from Wikidata, and from DBpedia, and fuse them into our PreFusion dataset (in JSON-LD). More information on the fusion process, which is the engine behind GFS, can be found in the FlexiFusion paper.
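
To give a rough idea of the data shape, here is a minimal, purely illustrative Python sketch of what a fused PreFusion-style record could look like. The field names are our own placeholders, not the actual schema, which is described in the FlexiFusion paper.

    import json

    # Hypothetical sketch of a fused PreFusion-style record: for one entity and
    # one property, all candidate values are kept side by side together with
    # their provenance. Field names and numbers are illustrative only.
    record = {
        "@id": "https://global.dbpedia.org/id/XYZ",  # placeholder global ID
        "property": "http://dbpedia.org/ontology/populationTotal",
        "values": [
            {"value": 28690, "source": "de.wikipedia.org"},  # made-up numbers
            {"value": 28673, "source": "en.wikipedia.org"},
            {"value": 28690, "source": "wikidata.org"},
        ],
    }

    print(json.dumps(record, indent=2))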

Can’t join the conference or want to find out more about GlobalFactSync?

No problem: the poster we are presenting at the conference is currently available here and will soon also be available here. Additionally, why not go through our project timeline, follow up on our progress so far, and find out what’s coming up next.

In case you have specific questions regarding GlobalFactSync or even some helpful feedback, just ping us via dbpedia@infai.org. We also have our new DBpedia Forum, home to the DBpedia community, which is just waiting for you to start a discussion around GlobalFactSync. Why not start one now?

For general DBpedia news and updates follow us on Twitter.

…And if you are in Berlin at WikidataCon 2019, stop by our poster and talk to our developers. They are looking forward to a lively exchange with you.

All the best

yours,


DBpedia Association


SEMANTiCS 2019 Interview: Katja Hose

Today’s post features an interview with our DBpedia Day keynote speaker Katja Hose, a Professor of Computer Science at Aalborg University, Denmark. In this interview, Katja talks about increasing the reliability of knowledge graph access as well as her expectations for SEMANTiCS 2019.

Prior to joining Aalborg University, Katja was a postdoc at the Max Planck Institute for Informatics in Saarbrücken. She received her doctoral degree in Computer Science from Ilmenau University of Technology in Germany.

Can you tell us something about your research focus?

The most important focus of my research has been querying the Web of Data, in particular, efficient query processing over distributed knowledge graphs and Linked Data. This includes indexing, source selection, and efficient query execution. Unfortunately, it happens all too often that the services needed to access remote knowledge graphs are temporarily not available, for instance, because a software component crashed. Hence, we are currently developing a decentralized architecture for knowledge sharing that will make access to knowledge graphs a reliable service, which I believe is the key to a wider acceptance and usage of this technology.
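
To make the distributed-querying theme concrete, here is a minimal Python sketch of a SPARQL 1.1 federated query. It assumes the SPARQLWrapper library and that the public DBpedia endpoint permits SERVICE calls to Wikidata; an outage of either endpoint makes the whole query fail, which is exactly the reliability problem described above.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Federated query: evaluated at the DBpedia endpoint, the SERVICE block is
    # delegated to Wikidata at query time. If either remote service is down,
    # the whole query fails: the availability issue at stake.
    query = """
    PREFIX dbr:  <http://dbpedia.org/resource/>
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
      dbr:Aalborg_University owl:sameAs ?wd .
      FILTER(STRSTARTS(STR(?wd), "http://www.wikidata.org/"))
      SERVICE <https://query.wikidata.org/sparql> {
        ?wd rdfs:label ?label .
        FILTER(LANG(?label) = "en")
      }
    }
    """

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    try:
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["label"]["value"])
    except Exception as err:  # e.g. one of the endpoints is temporarily down
        print("remote service not available:", err)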

How do you personally contribute to the advancement of semantic technologies?

I contribute by doing research, advancing the state of the art, and applying semantic technologies to practical use cases. The most important achievements so far have been our work on indexing and federated query processing, and we have only recently published our first work on a decentralized architecture for sharing and querying semantic data. I have also been using semantic technologies in other contexts, such as data warehousing, fact-checking, sustainability assessment, and rule mining over knowledge bases.

Overall, I believe the greatest ideas and advancements come when trying to apply semantic technologies to real-world use cases and problems, and that is what I will keep on doing.

Which trends and challenges do you see for linked data and the semantic web?

To my mind, the idea behind Linked Data and the Semantic Web is the second-best invention after the Internet itself. But unlike the Internet, Linked Data and the Semantic Web are only slowly being adopted by a broader community and by industry.

I think part of the reason is that, from a company’s point of view, there are not many incentives or added benefits to broadly sharing achievements. Some companies are simply reluctant to openly share their results and experiences in the hope of retaining an advantage over their competitors. I believe that if these success stories were shared more openly, and this is the trend we are witnessing right now, more companies would see the potential for their own problems and find new exciting use cases.

Another particular challenge, which we will have to overcome, is that it is currently still far too difficult to obtain and maintain an overview of what data is available, and to formulate a query as a non-expert in SPARQL and the particular domain… and of course, there is the challenge that accessing these datasets is not always reliable.

As artificial intelligence becomes more and more important, what is your vision of AI?

AI and machine learning are indeed becoming more and more important. I do believe that these technologies will bring us a huge step ahead. The process has already begun. But we also need to be aware that we are currently in the middle of a big hype where everybody wants to use AI and machine learning – although many people actually do not truly understand what it is and if it is actually the best solution to their problems. It reminds me a bit of the old saying “if the only tool you have is a hammer, then every problem looks like a nail”. Only time will tell us which problems truly require machine learning, and I am very curious to find out which solutions will prevail.

However, the current state of the art is still very far away from the AI systems that we all know from Science Fiction. Existing systems operate like black boxes on well-defined problems and lack true intelligence and understanding of the meaning of the data. I believe that the key to making these systems trustworthy and truly intelligent will be their ability to explain their decisions and their interpretation of the data in a transparent way.

What are your expectations about Semantics 2019 in Karlsruhe?

First and foremost, I am looking forward to meeting a broad range of people interested in semantic technologies. In particular, I would like to get in touch with industry-based research and to be exposed…

The End

We would like to thank Katja Hose for her insights and are happy to have her as one of our keynote speakers.

Visit SEMANTiCS 2019 in Karlsruhe, Sep 9-12 and get your tickets for our community meeting here. We are looking forward to meeting you during DBpedia Day.

Yours DBpedia Association

GlobalFactSync – Synchronizing Wikidata & Wikipedia’s infoboxes

How is data edited in Wikipedia/Wikidata? Where does it come from? And how can we synchronize it globally?  

The GlobalFactSync (GFS) Project — funded by the Wikimedia Foundation — started in June 2019 and has two goals:

  • Answer the above-mentioned three questions.
  • Build an information system to synchronize facts between all Wikipedia language-editions and Wikidata. 

Now we are seven weeks into the project (10+ more months to go) and we are releasing our first prototypes to gather feedback. 

How – Synchronization vs Consensus

We follow a strict “Human(s)-in-the-loop” approach when we talk about synchronization: the final decision whether or not to synchronize a value should rest with a human editor who understands consensus and its implications. There will be no automatic imports. Our focus is to drastically reduce the time needed to research all references for individual facts.

A trivial example that illustrates our reasoning is the release date of the single “Boys Don’t Cry” (March 16th, 1989) in the English, Japanese, and French Wikipedia, in Wikidata, and finally in the external open database MusicBrainz. A human editor might need 15-30 minutes to find and open all the different sources, while our current prototype can spot the differences and display them in five seconds.
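
To picture what the prototype automates, here is a small, purely illustrative Python sketch (not the actual GFS code) of the underlying check: group the candidate values for one fact by value and report which sources back which variant.

    from collections import defaultdict

    # Minimal illustration of the consistency check the prototype automates:
    # group the candidate values for one fact by value, so that disagreements
    # and the sources behind them become visible at a glance.
    def spot_differences(fact_candidates):
        by_value = defaultdict(list)
        for source, value in fact_candidates:
            by_value[value].append(source)
        return by_value

    candidates = [
        ("enwiki", "1989-03-16"),
        ("frwiki", "1989-03-16"),
        ("jawiki", "1989-04-16"),  # hypothetical discrepancy for illustration
        ("wikidata", "1989-03-16"),
        ("musicbrainz", "1989-03-16"),
    ]

    for value, sources in spot_differences(candidates).items():
        print(value, "backed by:", ", ".join(sources))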

We already had our first successful edit, where a Wikipedia editor fixed a discrepancy found with our prototype: “I’ve updated Wikidata so that all five sources are in agreement.” We are now working on the following tasks:

  • Scaling the system to all infoboxes, Wikidata and selected external databases (see below on the difficulties there)
  • Making the system:
    •  “live” without stale information
    • “reliable” with less technical errors when extracting and indexing data
    • “better referenced” by not only synchronizing facts but also references 

Contributions and Feedback

To ensure that GlobalFactSync will serve and help the Wikiverse, we encourage everyone to try our data and microservices and leave us some feedback, either on our Meta-Wiki page or via email. In the following 10+ months, we intend to improve and build upon these initial results. At the same time, these microservices are available for every developer to exploit and to build useful applications with. The most promising contributions will be rewarded with the book “Engineering Agile Big-Data Systems”. Please post feedback on any tool or GUI here. In case you need changes to be made to the API, please let us know, too.
For the ambitious future developers among you, we have some budget left that we will dedicate to an internship. In order to apply, just mention it in your feedback post. 

Finally, to talk to us and other GlobalFactSync users, you may want to visit WikidataCon and Wikimania, where we will present the latest developments and the progress of our project.

Data, APIs & Microservices (Technical prototypes) 

Data Processing and Infobox Extraction

For GlobalFactSync we use data from Wikipedia infoboxes of different languages, as well as Wikidata and DBpedia, and fuse them into one big, consolidated dataset – the PreFusion dataset (in JSON-LD). More information on the fusion process, which is the engine behind GFS, can be found in the FlexiFusion paper. One of our next steps is to integrate MusicBrainz as an external dataset into this process. We hope to add even more such external datasets to increase the amount of available information and references.
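
As a rough illustration of the fuse step (our own simplification, not the actual FlexiFusion implementation), the following Python sketch groups per-source statements about the same subject and property into one consolidated record:

    from itertools import groupby
    from operator import itemgetter

    # Illustrative sketch of the fuse step: per-source statements about the
    # same global subject and property are grouped into one consolidated
    # record that preserves every candidate value and its provenance.
    statements = [
        # (global_id, property, value, source) -- all values are made up
        ("gid:42", "dbo:releaseDate", "1989-03-16", "enwiki"),
        ("gid:42", "dbo:releaseDate", "1989-03-16", "wikidata"),
        ("gid:42", "dbo:releaseDate", "1989-04-16", "jawiki"),  # hypothetical
    ]

    statements.sort(key=itemgetter(0, 1))
    for (subject, prop), group in groupby(statements, key=itemgetter(0, 1)):
        values = [{"value": v, "source": s} for _, _, v, s in group]
        print({"subject": subject, "property": prop, "values": values})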

First microservices 

We deployed a set of microservices to show the current state of our toolchain.

  • [Initial User Interface] The GlobalFactSync UI prototype (available at http://global.dbpedia.org) shows all extracted information available for one entity across different sources. It can be used to analyze the factual consensus between different Wikipedia articles about the same thing. Example: look at the variety of population counts for Grimma.
  • [Reference Data Download] We ran the Reference Extraction Service over 10 Wikipedia languages. Download dumps here.
  • [ID service] Last but not least, we offer the Global ID Resolution Service. It ties together all available identifiers for one thing (at the moment all DBpedia/Wikipedia and Wikidata identifiers – MusicBrainz coming soon…) and shows their stable DBpedia Global ID (see the usage sketch below). 
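
As a usage example, the Python sketch below asks the service which identifiers are known for a Wikidata entity. The exact path and parameter name are assumptions on our part, so please check http://global.dbpedia.org for the actual interface.

    import requests

    # Hypothetical usage sketch of the Global ID Resolution Service; the
    # "/same-thing/lookup/" path and the "uri" parameter are assumptions here.
    def resolve(identifier):
        resp = requests.get(
            "https://global.dbpedia.org/same-thing/lookup/",
            params={"uri": identifier},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # expected: the cluster of equivalent IDs + global ID

    print(resolve("http://www.wikidata.org/entity/Q64"))  # Q64 = Berlin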

Finding sync targets

In order to test our algorithms, we started by looking at various groups of subjects, our so-called sync targets. Based on the different subjects, a set of problems with varying levels of complexity was identified:

  • identity check/check for ambiguity — Are we talking about the same entity? 
  • fixed vs. varying property — Some properties vary depending on nationality (e.g., release dates) or point in time (e.g., population count).
  • reference — Depending on the outcome of the identity check and on whether the property is fixed or varying, the appropriate reference might differ. Also, for some targets no queryable online reference might be available.
  • normalization/conversion of values — Depending on the language/nationality of the article, properties can come in varying units (e.g., currency, metric vs. imperial system); a small conversion sketch follows this list.
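
The sketch below illustrates the normalization problem in Python; apart from the conversion factors, everything in it is made up for illustration.

    # Tiny illustration of the normalization problem: before values from
    # different language editions can be compared, units must be brought
    # onto a common scale.
    TO_KILOMETRES = {"km": 1.0, "mi": 1.609344, "m": 0.001}

    def normalize_length(value, unit):
        """Convert a length given in km, mi or m to kilometres."""
        return value * TO_KILOMETRES[unit]

    # e.g. one infobox may state 26.2 mi where another states 42.165 km
    print(round(normalize_length(26.2, "mi"), 3))  # 42.165
    print(normalize_length(42.165, "km"))          # 42.165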

The check for ambiguity is the most crucial step to ensure that the infoboxes being compared do refer to the same entity. We found instances where the Wikipedia page and the infobox shown on that page were presenting information about different subjects (e.g., see here).

Examples

The group ‘NBA players’ was identified as a good sync target to start with. There are no ambiguity issues, it is a clearly defined group of persons, and the number of varying properties is very limited. Information seems to be derived mainly from two websites (nba.com and basketball-reference.com), and normalization is only a minor issue. ‘Video games’ also proved to be an easy sync target, with the main problem being varying properties such as different release dates for different platforms (Microsoft Windows, Linux, macOS, Xbox) and different regions (NA vs. EU).

More difficult topics, such as ‘cars’, ‘music albums’, and ‘music singles’, showed more potential for ambiguity as well as property variability. A major concern we found was Wikipedia pages that contain multiple infoboxes (often seen on pages referring to a certain type of car, such as this one). Reference and fact extraction can be done for each infobox, but currently we run into trouble once we fuse this data.

Further information about sync targets and their challenges can be found on our Meta-Wiki discussion page, where Wikipedians who deal with infoboxes on a regular basis can also share their insights on the matter. Some issues were also found regarding the mapping of properties. In order to make GlobalFactSync as applicable as possible, we rely on the DBpedia community to help us improve the mappings. If you are interested in participating, you can find us at http://mappings.dbpedia.org and in the DBpedia forum.

Bottom line

We value your feedback.

Your DBpedia Association

Vítejte v Praze! (Welcome to Prague!)

After our meetups in Poland and France last year, we delighted the Czech DBpedia community with a DBpedia meetup. It was co-located with the XML Prague conference on February 7th, 2019.

First and foremost, we would like to thank Jirka Kosek (University of Economics, Prague), Milan Dojchinovski (AKSW/KILT, Czech Technical University in Prague), Tomáš Kliegr (KIZI/University of Economics, Prague) and the XML Prague conference for co-hosting and supporting the event.

Opening the DBpedia community meetup

The Czech DBpedia community and the DBpedia Databus were the focus of this meetup. We therefore invited local data scientists as well as DBpedia enthusiasts to discuss the state of the art of the DBpedia Databus. Sebastian Hellmann (AKSW/KILT) opened the meeting with an introduction to DBpedia and the DBpedia Databus. Following that, Marvin Hofer explained how to use the DBpedia Databus in combination with Docker, and Johannes Frey (AKSW/KILT) presented the methods behind DBpedia’s Data Fusion and Global ID Management.

Showcase Session

Marek Dudáš (KIZI/UEP) started the DBpedia Showcase Session with a presentation on “Concept Maps with the help of DBpedia”, where he showed the audience how to create a “concept map” with the ContextMinds application. Furthermore, Tomáš Kliegr (KIZI/UEP) presented “Explainable Machine Learning and Knowledge Graphs”. He explained his contribution to a rule-based classifier for business use cases. Two other showcases followed: Václav Zeman (KIZI/UEP), who presented “RdfRules: Rule Mining from DBpedia” and Denis Streitmatter (AKSW/KILT), who demonstrated the “DBpedia API”.

Miroslav Blasko presents “Ontology-based Dataset Exploration”

Closing this session, Miroslav Blasko (CTU, Prague) gave a presentation on “Ontology-based Dataset Exploration”. He explained a taxonomy developed for describing datasets and presented several use cases aimed at improving content-based descriptors.

Summing up, the DBpedia meetup in Prague brought together more than 50 DBpedia enthusiasts from all over Europe. They engaged in vital discussions about Linked Data, the DBpedia databus, as well as DBpedia use cases and services.

In case you missed the event, all slides and presentations are available on our website. Further insights, feedback, and photos about the event can be found on Twitter via #DBpediaPrague.

We are currently looking forward to the next DBpedia Community Meeting, on May 23rd, 2019 in Leipzig, Germany. This meeting is co-located with the Language, Data and Knowledge (LDK) conference. Stay tuned and check Twitter, Facebook and the website or subscribe to our newsletter for the latest news and updates.

Your DBpedia Association

A year with DBpedia – Retrospective Part 3

This is the final part of our journey around the world with DBpedia. This time we will take you from Austria to Mountain View, California, and on to London, UK.

Come on, let’s do this.

Welcome to Vienna, Austria – SEMANTiCS

More than 110 DBpedia enthusiasts joined our Community Meeting in Vienna on September 10th, 2018. The event was again co-located with SEMANTiCS, a very successful collaboration. Luckily, we got hold of two brilliant keynote speakers to open our meeting. Javier David Fernández García, Vienna University of Economics, opened the meeting with his keynote “Linked Open Data cloud – act now before it’s too late”. He reflected on the challenges on the way towards a truly machine-readable and decentralized Web of Data. Javier reviewed the current state of affairs, highlighted key technical and non-technical challenges, and outlined potential solution strategies. The second keynote speaker was Mathieu d’Aquin, Professor of Informatics at the Insight Centre for Data Analytics at NUI Galway. Mathieu, who specializes in data analytics, closed the meeting with his keynote “Dealing with Open-Domain Data”.

The 12th edition of the DBpedia Community Meeting also featured a special chapter session, chaired by Enno Meijers from the Dutch DBpedia Language Chapter. The speakers presented the latest technical and organizational developments of their respective chapters, and the session served mainly as an exchange platform for the different DBpedia chapters. For the first time, representatives of the European chapters discussed problems and challenges of DBpedia from their point of view. Furthermore, tools, applications, and projects were presented by each chapter’s representative.

In case you missed the event, a more detailed article can be found here. All slides and presentations are also available on our Website. Further insights, feedback, and photos about the event are available on Twitter via #DBpediaDay.

Welcome to Mountain View – GSoC mentor summit

GSoC was a vital part of DBpedia’s endeavors in 2018. We had three very talented students who, with the help of our great mentors, made it to the finish line of the program. You can read about their projects and success stories in a dedicated post here.

After three successful months of mentoring, two of our mentors had the opportunity to attend the annual Google Summer of Code mentor summit. Mariano Rico and Thiago Galery represented DBpedia at the event this year. They engaged in a lively discussion about this year’s program and about the lessons learned, highlights, and drawbacks they experienced during the summer. A special focus was put on how to engage potential GSoC students as early as possible to get as much commitment as possible. The ideas the two mentors brought back in their suitcases will help improve DBpedia’s part of the program for 2019. And apparently, chocolate was a very big thing there ;).

In case you have a project idea for GSoC 2019 or want to mentor a DBpedia project next year, just drop us a line via dbpedia@infai.org. Also, as we intend to participate in the upcoming edition, please spread the word amongst students, especially female students, who fancy spending their summer coding on a DBpedia project. Thank you.

Welcome to London, England – Connected Data London 2018

In early November, we were invited to Connected Data London again. After 2017, this great event seems to be becoming a regular fixture in our DBpedia schedule.

Sebastian Hellmann, Executive Director of the DBpedia Association, participated as a panelist in the discussion around “Building Knowledge Graphs in the Real World”. Together with speakers from Thomson Reuters, Zalando, and Textkernel, he discussed definitions of knowledge graphs, best practices for building and using them, as well as the recent hype around the topic.

Visitors of CNDL2018 had the chance to grab a copy of our brand-new flyer and talk with us about the DBpedia Databus. The event also gave us the opportunity to meet early adopters of our Databus – a decentralized data publication, integration, and subscription platform. Thank you very much for that opportunity.

A year went by

2018 has gone by so fast and brought so much for DBpedia. The DBpedia Association got the chance to meet more of DBpedia’s language chapters, and we developed the DBpedia Databus to the point that it can finally be launched in spring 2019. DBpedia is a community project that relies on people, and with the DBpedia Databus we are creating a platform that enables publishing and builds a networked data economy around it. So stay tuned for exciting news coming up next year. Until then, we would like to thank all DBpedia enthusiasts around the world for their research with DBpedia and for their support of and contributions to DBpedia. Kudos to you.

All that remains to say is have yourself a very merry Christmas and a dazzling New Year. May 2019 be peaceful, exciting and prosperous.

Yours – being in a cheerful and festive mood –

DBpedia Association

A year with DBpedia – Retrospective Part Two

Welcome to the second part of our journey around the world with DBpedia. This time we are taking you to Greece, Germany, Australia, and finally France.

Let the travels begin.

Welcome to Thessaloniki, Greece & ESWC

DBpedians from the Portuguese Chapter presented their research results during ESWC 2018 in Thessaloniki, Greece. The team around Diego Moussalem developed a demo to extend MAG to support entity linking in 40 different languages. A special focus was put on low-resource languages such as Ukrainian, Greek, Hungarian, Croatian, Portuguese, Japanese and Korean. The demo relies on online web services which allow for easy access to (their) entity linking approaches. Furthermore, it can disambiguate against DBpedia and Wikidata. Currently, MAG is used in diverse projects and has been widely used by the Semantic Web community. Check the demo via http://bit.ly/2RWgQ2M. Further information about the development can be found in a research paper, available here.

Welcome back to Leipzig Germany

With our new credo “connecting data is about linking people and organizations”, we finalized our concept of the DBpedia Databus halfway through 2018. This global DBpedia platform aims at sharing the efforts of OKG governance, collaboration, and curation to maximize societal value and develop a linked data economy.

With this new strategy, we wanted to meet some DBpedia enthusiasts of the German DBpedia community. Fortunately, the LSWT (Leipzig Semantic Web Tag) 2018, hosted in Leipzig, home of the DBpedia Association, proved to be the right opportunity. It was the perfect platform to exchange with researchers, industry, and other organizations about current developments and future applications of the DBpedia Databus. Apart from hosting a hands-on DBpedia workshop for newbies, we also organized a well-received WebID tutorial. Finally, the event gave us the opportunity to position the new DBpedia Databus as a global open knowledge network that aims at providing unified and global access to knowledge (graphs).

Welcome down under – Melbourne Australia

Further research results that rely on DBpedia were presented during ACL 2018 in Melbourne, Australia, July 15th to 20th, 2018. The research built on DBpedia data from the WebNLG corpus, created for a challenge in which participants automatically converted non-linguistic data from the Semantic Web into a textual format. This data was then used to train a neural network model for generating referring expressions of a given entity. For example, if Jane Doe is a person’s official name, referring expressions of that person would be “Jane”, “Ms Doe”, “J. Doe”, or “the blonde woman from the USA”.

If you want to dig deeper but missed ACL this year, the paper is available here.

Welcome to Lyon, France

In July, the DBpedia Association travelled to France. With the organizational support of Thomas Riechert (HTWK, InfAI) and Inria, we finally met the French DBpedia community in person and presented the DBpedia Databus. Additionally, we got to meet the French DBpedia Chapter and the researchers and developers around Oscar Rodríguez Rocha and Catherine Faron Zucker. They presented current research results revolving around an approach to automate the generation of educational quizzes from DBpedia. Their aim is a useful tool for the French educational system that:

  • helps to test and evaluate the knowledge acquired by learners and…
  • supports lifelong learning on various topics or subjects. 

The French DBpedia team followed a 4-step approach:

  1. Quizzes are first formalized with Semantic Web standards: questions are represented as SPARQL queries and answers as RDF graphs (see the sketch after this list).
  2. Natural language questions, answers and distractors are generated from this formalization.
  3. Different strategies were defined to extract multiple-choice questions, correct answers and distractors from DBpedia.
  4. A measure was defined for the information content of the elements of an ontology, and of the set of questions contained in a quiz.
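
To make step 1 concrete, here is a hypothetical illustration (ours, not taken from the paper): the question “What is the capital of France?” is represented by a SPARQL query whose single binding is the correct answer, while a second query draws distractors from the capitals of other countries.

    # Hypothetical formalization of one quiz question (illustrative only).
    # The binding of ?capital in the first query is the correct answer;
    # the second query collects distractors for the multiple-choice options.
    CORRECT_ANSWER_QUERY = """
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?capital WHERE { dbr:France dbo:capital ?capital }
    """

    DISTRACTOR_QUERY = """
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT DISTINCT ?capital WHERE {
      ?country a dbo:Country ; dbo:capital ?capital .
      FILTER(?country != dbr:France)
    } LIMIT 3
    """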

Oscar R. Rocha and Catherine F. Zucker also published a paper explaining their approach to automatically generating quizzes from DBpedia according to official French educational standards.

Thank you to all the DBpedia enthusiasts we met during our journey.

With this journey from Europe to Australia and back, we provided you with insights into research based on DBpedia as well as a glimpse of the French DBpedia Chapter. In the final part of our journey, coming up next week, we will take you to Vienna, Mountain View, and London. In the meantime, stay tuned and visit our Twitter channel or subscribe to our DBpedia Newsletter.

Have a great week.

Yours DBpedia Association

Retrospective: GSoC 2018

With all the beta-testing, the evaluations of the community survey part I and part II, and the preparations for SEMANTiCS 2018, we almost lost sight of telling you about the final results of GSoC 2018. In the following, we present a short recap of this year’s students and projects that made it to the finish line of GSoC 2018.

Et Voilà

We started out with six students who committed to GSoC projects. However, over the course of the summer, some dropped out or did not pass the midterm evaluation. In the end, we had three finalists who made it through the program.

Meet Bharat Suri

… who worked on “Complex Embeddings for OOV Entities”. The aim of this project was to enhance the DBpedia Knowledge Base by enabling the model to learn from the corpus and generate embeddings for different entities, such as classes, instances and properties.  His code is available in his GitHub repository. Tommaso Soru, Thiago Galery and Peng Xu supported Bharat throughout the summer as his DBpedia mentors.

Meet Victor Fernandez

… who worked on a “Web application to detect incorrect mappings across DBpedias in different languages”. The aim of his project was to create a web application and API to aid in automatically detecting inaccurate DBpedia mappings. The mappings for each language are often not aligned, causing inconsistencies in the quality of the generated RDF. The final code of this project is available in Victor’s repository on GitHub. He was mentored by Mariano Rico and Nandana Mihindukulasooriya.

Meet Aman Mehta

… whose project aimed at building a model which allows users to query DBpedia directly using natural language, without the need for any previous experience in SPARQL. His task was to train a sequence-to-sequence neural network model to translate any natural language query (NLQ) into the corresponding SPARQL query. See the results of this project in Aman’s GitHub repository. Tommaso Soru and Ricardo Usbeck were his DBpedia mentors during the summer.
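
To illustrate the task with a toy sketch of our own (not Aman’s actual pipeline): the model learns from pairs of natural language questions and SPARQL queries, both turned into token-id sequences before training.

    # Toy sketch of the NLQ-to-SPARQL translation setup (illustrative only).
    training_pairs = [
        (
            "who is the mayor of Leipzig",
            "SELECT ?mayor WHERE { dbr:Leipzig dbo:mayor ?mayor }",
        ),
    ]

    def encode(sentence, vocab):
        """Map each whitespace token to an integer id, growing the vocabulary."""
        return [vocab.setdefault(tok, len(vocab)) for tok in sentence.split()]

    vocab = {}
    for nlq, sparql in training_pairs:
        # A sequence-to-sequence model is trained to map the first id
        # sequence to the second one.
        print(encode(nlq, vocab), "->", encode(sparql, vocab))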

Finally, these projects will contribute to the overall development of DBpedia. We are very satisfied with the contributions and results our students produced. Furthermore, we would like to genuinely thank all students and mentors for their effort. We hope to stay in touch and see a few faces again next year.

A special thanks goes out to all mentors and students whose projects did not make it through.

GSoC Mentor Summit

Now it is the mentors’ turn to take part in this year’s GSoC mentor summit, October 12th till 14th. This year, Mariano Rico and Thiago Galery will represent DBpedia at the event. Their task is to engage in a lively discussion about this year’s program and about the lessons learned, highlights, and drawbacks they experienced during the summer. Hopefully, they will return with new ideas from the exchange with mentors from other open-source projects. In turn, we hope to improve our part of the program for students and mentors.

Sit tight, follow us on Twitter and we will update you about the event soon.

Yours DBpedia Association

DBpedia Chapters – Survey Evaluation – Episode One

DBpedia Chapters – Challenge Accepted

The DBpedia community currently comprises more than 20 language chapters, ranging from Basque and Japanese to Portuguese and Ukrainian. Managing such a variety of chapters is a huge challenge for the DBpedia Association, because individual requirements are as diverse as the different languages the chapters represent. There are chapters that started out back in 2012, such as DBpediaNL. Others, like the Catalan chapter, are brand new and have different needs.

So, in order to optimize chapter development, we aim to formalize an official DBpedia Chapter Consortium. This will permit a close dialogue with the chapters in order to address all relevant matters regarding communication and organization as well as technical issues. We want to provide the community with the best basis to set up new chapters and to maintain or develop the existing ones.

Our main targets for this are to: 

  • improve general chapter organization,
  • unite all DBpedia chapters with central DBpedia,
  • promote better communication and understanding and,
  • create synergies for further developments, and ease access to information about what is being done by all DBpedia bodies

As a first step, we needed to collect information about the current state of things. Hence, we conducted two surveys: one directed at chapter leads, the other at technical heads.

In this blog post, we would like to present the results of the survey conducted with the chapter leads. It addressed matters of communication and organizational relevance. Unfortunately, only nine out of 21 chapters participated, so the outcome of the survey speaks only for roughly 42% of all DBpedia chapters.

Chapter Survey – Episode One

Most chapters have very little personnel committed to the work done for the chapter, for various reasons. 66% of the chapters have only one to four people involved in the core work. Only one chapter has about ten people working on it.

Overall, the chapters use various marketing channels for promotion, visibility, and outreach. The website as well as event participation, Twitter, and Facebook are among their favourite channels.

The following chart shows how the chapters currently communicate organizational and communication issues within their respective chapter and to the DBpedia Association.

The second chart shows that a third of the chapters favour an exchange among chapters and with the DBpedia Association via the discussion mailing list as well as regular chapter calls.

The survey results show that 66.6% of the chapters currently do not consider their mode of communication efficient enough. They think that their communication with the DBpedia Association should improve.

As pointed out before, most chapters have only limited personnel resources. It is no wonder that most of them need help to improve the work and impact of their chapter. The following chart shows the kind of support chapters require to improve their overall work, organization, and communication. Most noteworthy, technical, marketing, and organizational support are the top three aspects the chapters need help with.

The good news is that all of the chapters maintain a DBpedia website. However, the frequency of updates varies among them. See the chart on the right.

Earlier this year, we announced that we would like to align all chapter websites with the main DBpedia website. That includes a common structure and a corporate design similar to the main one. Above all, this is important for the overall image and recognition factor of DBpedia in the tech community. With respect to that, we inquired whether chapters would like to participate in an alignment of the websites or not.

With respect to the marketing support the chapters require from the Association, more than 50% of the chapters would like to be promoted regularly via the main DBpedia Twitter channel.

Good news: just forward us your news or tag us with @dbpedia and we will share ’em.

Almost there.

Finally, we asked about the chapters’ requirements for improving their work and the impact of their results.

Bottom line

All in all, we are very grateful for your contributions. This data will help us to develop a strategy to work towards the targets mentioned above. We will now use it to conceptualize a small program to assist chapters in their organization and marketing endeavours. Furthermore, the information given will also help us to tackle the different issues that arose, implement the necessary support, and improve chapter development and chapter visibility.

In episode two, we will delve into the results of the technical survey. Sit tight and follow us on Twitter, Facebook, LinkedIn or subscribe to our newsletter.

Finally, one last remark. If you want to promote news of your chapter or otherwise like to increase its visibility, you are always welcome to:

  • forward us the respective information to be promoted via our marketing channels 
  • use your own Twitter channel and tag your post with @dbpedia, so we can retweet your news. 
  • always use #dbpediachapters

Looking forward to your news.

Yours

DBpedia Association

Beta-Test Updates

While everyone at the DBpedia Association was preparing for the SEMANTiCS Conference in Vienna, we also managed to reach an important milestone regarding the beta-test for our data release tool.

First and foremost, 3,500 files have already been published with the plugin. These files will be part of the new DBpedia release and are available in our LTS repository.

Secondly, the documentation of the testing has been brought into good shape. Feel free to drop by and check it out.
Thirdly, we reached our first interoperability goal. The current metadata is sufficient to produce RSS 1.0 feeds. See here for further information. We also defined a loose roadmap at the top of the readme, where interoperability with DCAT and DCAT-AP has high priority.

Now we have some time to support you and work with you one on one, and to prepare the configurations that help you set up your data releases. Lastly, we have already received data from DNB and SUMO, so we will start to look into these more closely.

Thanks to all the beta-testers for your nice work.

We keep you posted.

Yours

DBpedia Association

Meet the DBpedia Chatbot

This year’s GSoC is slowly coming to an end, with the final evaluations already being submitted. To bridge the waiting time until the final results are published, we would like to draw your attention to a great tool that was developed during last year’s GSoC.

Meet the DBpedia Chatbot. 

DBpedia Chatbot is a conversational chatbot for DBpedia which is accessible through the following platforms:

  1. A Web Interface
  2. Slack
  3. Facebook Messenger

Main Purpose

The bot is capable of responding to users in the form of simple short text messages or through more elaborate interactive messages. Users can communicate or respond to the bot through text as well as through interactions (such as clicking on buttons/links). The bot serves four main purposes:

  1. Answering factual questions
  2. Answering questions related to DBpedia
  3. Exposing the research work being done in DBpedia as product features
  4. Casual conversation/banter

Question Types

The bot tries to answer text-based questions of the following types:

Natural Language Questions
  1. Give me the capital of Germany
  2. Who is Obama?

Location Information
  1. Where is the Eiffel Tower?
  2. Where is France’s capital?

Service Checks

Users can ask the bot to check if vital DBpedia services are operational.

  1. Is DBpedia down?
  2. Is lookup online?

Language Chapters

Users can ask basic information about specific DBpedia local chapters.

  1. DBpedia Arabic
  2. German DBpedia

Templates

These are predominantly questions related to DBpedia for which the bot provides predefined templatized answers. Some examples include:

  1. What is DBpedia?
  2. How can I contribute?
  3. Where can I find the mapping tool?

Banter

Messages which are casual in nature fall under this category. For example:

  1. Hi
  2. What is your name?
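
As a rough idea of how such type-based routing can work, here is a much-simplified Python sketch of our own; the chatbot’s real dispatch logic lives in its GitHub repository.

    # Much-simplified sketch: route an incoming message to a handler by type.
    def handle_banter(message):
        return "Hi! I am the DBpedia Chatbot."

    def handle_service_check(message):
        return "Let me check whether that service is up..."

    def handle_factual(message):
        return "Let me look that up for you: " + message

    ROUTES = [
        (lambda m: m.lower().rstrip("?!") in {"hi", "what is your name"}, handle_banter),
        (lambda m: "down" in m.lower() or "online" in m.lower(), handle_service_check),
        (lambda m: True, handle_factual),  # fallback: treat as a factual question
    ]

    def reply(message):
        for matches, handler in ROUTES:
            if matches(message):
                return handler(message)

    print(reply("Is DBpedia down?"))
    print(reply("Give me the capital of Germany"))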

If you would like to have a closer look at the internal processes and how the chatbot was developed, check out the DBpedia GitHub pages.

DBpedia Chatbot was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.

In case you want your DBpedia-based tool or demo to be published on our website, just follow the link and submit your information; we will do the rest.

Yours

DBpedia Association