geistlogistic

Information requires attention


Archive for the ‘privacy protection’ Category

Prospering future for Learning Analytics

Learning Analytics is a hot research topic at the moment and I’m curious what impact it will have on education systems in the long term. In any case, it currently ranks high on all research agendas; it is even an explicit research topic within the next EU FP7 TEL call in January 2013. At CELSTEC we have recently won two new EU projects that directly support our Learning Analytics research efforts:

Open Discovery Space (started 1st of April 2012) and LinkedUp (starts 1st of November 2012)

Both projects address the main research challenges we identified during the dataTEL project. Based on these, we have identified six main research objectives for the upcoming years:

  1. Collecting, sharing and open access to educational datasets
  2. Evaluation of data-driven applications
  3. Legal aspects (Ownership, Privacy, ethics)
  4. Visualizations of data
  5. Personalization and Recommender Systems
  6. Awareness support and reflection

Regarding research objective 1 – Educational data:

Open Discovery Space (ODS) and LinkedUp will make vast amounts of educational data available for end users and for data-driven research. The Open Discovery Space project builds on the ARIADNE Foundation infrastructure, which has already been used to deploy an initial version of the portal that provides access to a critical mass of about 1,000,000 content resources. This existing critical mass of eLearning resources will be expanded over the runtime of the project to ~1,550,000 resources in total, and the portal is expected to be connected to around 15 educational portals of regional, national or thematic coverage. Besides providing the educational resources, ODS will create technology to collect and share social data about the educational resources (ratings, tags and annotations) and make them available as Linked Data. With these objectives, ODS contributes to the research objectives of the Learning Analytics and Linked Data workshop we organized at the LAK12 conference.
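As a rough illustration of what publishing such social data as Linked Data could look like, here is a minimal Python sketch that serializes a rating and a tag for a resource as RDF triples in N-Triples syntax. The resource URIs and vocabulary choices below are illustrative assumptions, not the actual ODS data model:

```python
# Hedged sketch: emitting social data (a rating and a tag) about an
# educational resource as RDF triples in N-Triples syntax. URIs are invented.

def to_ntriples(subject, predicate, obj, is_literal=False):
    """Render a single RDF triple in N-Triples syntax."""
    o = '"%s"' % obj if is_literal else "<%s>" % obj
    return "<%s> <%s> %s ." % (subject, predicate, o)

RES = "http://example.org/ods/resource/42"  # assumed resource URI
REV = "http://purl.org/stuff/rev#"          # RDF Review vocabulary (rating)
TAG = "http://example.org/vocab/tags#"      # assumed tagging vocabulary

triples = [
    to_ntriples(RES, REV + "rating", "4", is_literal=True),
    to_ntriples(RES, TAG + "taggedWith", "http://example.org/tags/geometry"),
]

for t in triples:
    print(t)
```

Because N-Triples is line-based, such exports can be concatenated from many portals and loaded into any standard triple store.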

LinkedUp also aims to make more educational datasets publicly accessible. It will create a pool of existing educational datasets and organize various support and training activities around this data pool to stimulate the development of new and innovative data-driven tools for Technology-Enhanced Learning and Learning Analytics.

LinkedUp will therefore strongly follow the Linked Data approach, which has been applied successfully in a wide range of domains to expose datasets from a large variety of sources, leading to a globally distributed Web cloud of over 31 billion distinct statements. An overview of the currently available datasets in the LOD cloud is provided at http://lod-cloud.net/state.

Besides Open Educational Resources and Linked Data, LinkedUp will also consider publicly accessible data from data-driven companies such as OpenCalais (Thomson Reuters) or Mendeley. These companies provide access to their data via APIs that can be used to develop innovative data products within the LinkedUp competition.

Regarding research objective 2 – Evaluation of data applications:

There is a pressing need in Learning Analytics to make the effects of different data applications on learning and the stakeholders comparable in order to identify best-practice examples. Until now there is no common knowledge about which algorithm works better than another with a certain user model in a specific learning setting. LinkedUp directly addresses this challenge by developing an evaluation framework that can be applied to evaluate data-driven applications. The evaluation framework will be one of the major outcomes of the project. It will be developed together with a board of 30 experts in the field through the Group Concept Mapping approach.
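To make the idea of comparable evaluation concrete, here is a small sketch of one metric such a framework might include, precision@k, computed for two hypothetical recommenders on invented data (this is an illustration, not the actual LinkedUp framework):

```python
# Hedged sketch: comparing two dummy recommenders with precision@k.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are actually relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Two hypothetical algorithms' ranked outputs for the same learner:
algo_a = ["item3", "item7", "item1", "item9"]
algo_b = ["item2", "item7", "item5", "item3"]
relevant = {"item3", "item7"}  # items the learner actually engaged with

print(precision_at_k(algo_a, relevant, 2))  # → 1.0
print(precision_at_k(algo_b, relevant, 2))  # → 0.5
```

Fixing the dataset, the split, and the metric in this way is what makes results from different algorithms comparable at all.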

Regarding research objective 3 – Legal aspects:

In this context both projects have to come up with solutions that enable the use of educational data for data-supported applications. Both projects will therefore mainly focus on the Creative Commons license model. All datasets for which this is appropriate shall be published on the project’s website under a Creative Commons license (http://creativecommons.org/) or another appropriate license. In addition we want to explore related initiatives like the Creative Commons Learning Resource Metadata Initiative (LRMI), which aims to merge different competing initiatives in the area of OER description and to produce a usable and well-defined RDF schema for learning resource description (http://wiki.creativecommons.org/LRMI). Regarding privacy and ethics, both projects will review privacy requirements and concerns in each participating country in order to develop a suitable IPR & licensing agreement for the data pools.

Research objectives 4-6 – Visualizations, Personalization, and Awareness support:

These research objectives will also be addressed by both projects at a later stage. ODS addresses all three research objectives by providing innovative navigation and visualization tools to explore the vast amount of collected data within the ODS portal in a personalized way. We will investigate how to combine visualization and social navigation to increase the satisfaction of users when searching for resources, as well as explaining the rationale for the various selections or recommendations. Within LinkedUp we will support various projects that focus on these research objectives within the LinkedUp competition. We will organize three data competitions and support the participants with suitable datasets and technology support workshops, and provide substantial funding based on the assessment of the participating teams’ tools with the evaluation framework.

Looking forward to these exciting research activities!

Presentation on Confidence in Learning Analytics / The Pulse of LA

Below is the presentation of the paper written by Wolfgang Greller and myself on our international survey on Confidence in Learning Analytics at the LAK12 conference, Vancouver, Canada. The framework study was rated by many stakeholders as very helpful for describing the current needs of the young Learning Analytics field. There are quite some pointers to the study by other organisations like SURF or the OU UK. It was rated as the most helpful model to introduce learning analytics and the core research challenges to related stakeholders.

Confidence in Learning Analytics aka The Pulse of Learning Analytics (presentation by Hendrik Drachsler)

The article reported the results of an exploratory community survey in learning analytics that aimed at extracting the perceptions, expectations and levels of understanding of stakeholders in the domain. Structured along six different dimensions, we came to a number of conclusions, which we present below.

- Stakeholders: Participants identified the main beneficiaries in learning analytics as learners and teachers followed by organisations. Furthermore, the majority of respondents agreed that the biggest benefits would be gained in the teacher-to-student relationship and that learners would almost certainly require teacher help to learn from an analysis and for taking the right course of action. This is rather surprising as learning analytics is seen by many researchers as an innovative liberating force that would be able to change traditional learning by reflection and peer support, thus strengthening independent and lifelong learning. This latter opinion on independence could be seen in the ‘objective’ section of the survey (cf. chapter 3.2 above) where the majority expressed a preference for learning analytics to pay special attention to non-formalised and innovative ways of teaching and learning. Yet, respondents expect less potential impact on the student-to-student and the teacher-to-teacher relationships. This current perspective may be affected by the scarcity of learning analytics applications that demonstrate the innovative possibilities for learning and teaching. Thus people may not have a clear point of reference as, for example, is the case for ‘social networks’ where an established group of competitive platforms already exists.

- Objectives: The survey concludes further that research on learning analytics should focus on reflection support. The attained results clearly emphasized the importance of ‘stimulating reflection in the stakeholders about their own performance’. This goal could be supported by revealing hitherto hidden information about learners, which was the second most important objective. At the same time more timely information, institutional insights, and insights into the learning context were other areas of interest to the constituency.

- Data: Our institutional inventory in chapter 3.3 gives an overview of the most widespread IT systems. These could be prioritised by learning analytics technologies to gain an institutional foothold. They also provide the best ground for inter-institutional data sharing. Anonymisation can perhaps be seen as the most important enabler for such sharing to happen. It is emphasised in a number of responses as the second most important data attribute and confirmed in the willingness of people to share if data is anonymised. For a clear majority, anonymisation also reduces fears of privacy breaches through sharing (cf. chapter 3.5). On the other hand, when it comes to internal sharing with departments and operations units of the same institution, the use of available data will continue to be an uphill struggle and, according to participants, require good justification. Here, perhaps, a clearer mandate for ethical boards may help; these are already widely in place.
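As a minimal sketch of the anonymisation step discussed above (assuming a keyed-hash pseudonymisation scheme; the field names and secret key are invented), student identifiers could be replaced before a dataset leaves the institution. Strictly speaking this is pseudonymisation rather than full anonymisation; quasi-identifiers such as birth date or postcode would still need separate treatment:

```python
# Hedged sketch: pseudonymising student IDs with a keyed hash (HMAC-SHA256).
# The key stays inside the institution, so records remain linkable across
# exports without exposing real identifiers.
import hashlib
import hmac

SECRET_KEY = b"institution-internal-secret"  # assumption: kept private

def pseudonymise(student_id: str) -> str:
    """Stable, irreversible pseudonym for a student identifier."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"student_id": "s1234567", "course": "Statistics 101", "grade": 8.5}
shared = {**record, "student_id": pseudonymise(record["student_id"])}
print(shared["student_id"])  # pseudonym, not the real ID
```

The keyed construction matters: a plain unsalted hash of a short ID could be reversed by brute force over the ID space.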

- Methods: Chapter 3.4 on methods revealed that trust in learning analytics algorithms is not well developed. We interpret the mid-range return levels as hesitation towards “calculating” education and learning. What seems interesting to us is that the widely interpretable hope of gaining a comprehensive view on the learning progress was given the highest confidence, but perhaps this shows wishful thinking rather than a real expectation. Overall, expectations of impact on assessment were rather low. A majority of people did not see easier or more objective assessments coming out of learning analytics (cf. chapter 3.2). They were also not fully convinced that it would provide a good assessment of a learner’s state of knowledge (cf. chapter 3.4).

- Constraints: A large proportion of respondents thought learning analytics may lead to breaches of privacy and intrusion. Yet, they ranked privacy and ethical aspects as of lesser importance to consider (cf. chapter 3.5) or as belonging to further competence development (cf. chapter 3.6). However, data ownership was expressed as highly important. This may be interpreted as meaning that if ownership of data lies with the learners themselves, there is no perceived risk of privacy or ethical abuse. In any case, it seems that many organisations have ethical boards and guidelines in place. These may come to play an increasingly important role for institutional data exploitation, since a large number of respondents trust that anonymisation of educational data is possible but not necessarily sufficient to enable full internal exploitation of the educational data within an organisation.

- Competences: In the area of competences, participants mainly stressed the importance of self-directedness, critical reflection, analytic skills, and evaluation skills. On the other hand, few believe that students already possess these skills. This indicates to us a need to support students in developing these learning analytics competences. In conclusion, the results of this section suggest that there is little faith that learning analytics will lead to more independence of learners in controlling and managing their learning process. This identifies a clear need to guide students towards more self-directedness and critical reflection if learning analytics is to be applied more broadly in education. This interpretation is quite in contrast with some suggestions made with respect to empowerment of learners through graphical reflection of the learning process and further access to additional information regarding their learning progress.

The dataset used for this article and a pre-print of the study are available at the dspace.ou.nl repository (at http://dspace.ou.nl/handle/1820/3850). We would like to encourage the learning analytics community to gain additional insights from our dataset for the fast-evolving learning analytics research field.


Confidence in Learning Analytics – Survey Analysis – Part 1

While Learning Analytics is currently a very hot topic in the domain of TEL, the impression one gets is that there is also much uncertainty and hesitation about it. A clear common understanding and vision for the domain has not yet formed among the educator and researcher community.

To further investigate this situation, we conducted a stakeholder survey in September 2011 with an international audience from different sectors of education. We promoted the questionnaire at the Learning Analytics seminar of the Dutch SURF foundation. We then went on to distribute the questionnaire through the JISC network in the UK and via social media channels of relevant networks like the Google group on learning analytics, the SIG dataTEL at TELeurope, the Adaptive Hypermedia and the Dutch computer science (SIKS) mailing lists, and to participants in international massive open online courses (MOOCs) in technology-enhanced learning (TEL) using social network channels like Facebook, Twitter, LinkedIn, and XING.

We received a limited response rate from Romance countries (France, Iberia, Latin America) against a high return from Anglo-Saxon countries. The lack of responses from countries like Russia, China or India may be due to a number of factors: the distribution networks not reaching these countries, the language of the questionnaire (English), or a general lack of awareness of learning analytics in these countries. Still, the number of returns gave us a meaningful sample of people interested in the domain.

After removal of invalid responses we analysed answers from 156 participants, with 121 people (78%) completing the survey in full. In total, the survey now covers responses from 31 countries, with the highest concentrations in the UK (38), the US (30), and the Netherlands (22) (see Figure 1 below).


Figure 1: Geographic distribution of responses for the Learning Analytics survey

The findings provide some further insights into the current level of understanding of, and expectations toward, Learning Analytics among stakeholders. The survey results from 156 educational practitioners and researchers, mostly from the higher education sector, reveal substantial uncertainties in learning analytics. The survey was scaffolded by our conceptual framework on Learning Analytics presented in the topic description.

A pre-print of the related research article and the anonymised survey data are publicly available in our dspace environment [HERE]. The survey results will be presented at the 2nd Conference on Learning Analytics and Knowledge (LAK’12), 29.04. – 02.05.2012, in Vancouver, Canada.

In a series of blog postings, we will discuss the detailed findings of the survey and further introduce and discuss the Learning Analytics framework. Feel free to read through the related paper and post your questions here. We highly welcome additional analyses and new insights based on the provided survey data.

Furthermore, there will be some exciting developments around the Learning Analytics framework, as it appears to be useful for other researchers and practitioners in the field. Recently, the UK-based JISC foundation referred to it on their http://www.activedata.org website. Looking forward to their responses as well.

In the next blog posting we will describe the questionnaire design, some statistics about the participants, and first results regarding the ‘Stakeholder’ and ‘Objectives’ domains of the framework.

1st International Workshop on Learning Analytics and Linked Data (#LALD2012)

************************************************
CALL FOR PAPERS
1st International Workshop on Learning Analytics and Linked Data (#LALD2012)
in conjunction with the 2nd Conference on Learning Analytics and Knowledge (LAK’12)
29.04. – 02.05.2012, Vancouver (Canada).

Jointly organized by the http://linkededucation.org initiative and the EATEL SIG dataTEL (http://bit.ly/datatel).

Workshop website: http://lald.linkededucation.org/
Submission deadline full and short papers: 14.03.2012
Submission deadline extended abstracts  : 10.04.2012
************************************************

SCOPE
The main objective of the 1st International Workshop on Learning Analytics and Linked Data (#LALD2012) is to connect the research efforts on Linked Data and Learning Analytics to create visionary ideas and foster synergies between both young research fields. Therefore, the workshop will collect, explore, and present datasets, technologies and applications for Technology-Enhanced Learning (TEL) to discuss Learning Analytics approaches which make use of educational data or Linked Data sources. During the workshop, an overview of available educational datasets and related initiatives will be given. The participants will have the opportunity to present their own research with respect to educational datasets, technologies and applications and discuss major challenges to collect, reuse and share these datasets.

BACKGROUND
In TEL, a multitude of datasets exists containing detailed observations of events in learning environments that offer new opportunities for teaching and learning. The available datasets can be roughly distinguished between (a) Linked Data – Open Web Data and (b) Personal learning data from different learning environments.

Open Web data covers educational data publicly available on the Web, such as Linked Open Data (LOD) published by institutions about their courses and other resources; examples include (but are not limited to), The Open University (UK), the National Research Council (CNR, Italy), Southampton University (UK) or the mEducator Linked Educational Resources. It also includes the emergence of LD-based metadata schemas and TEL-related datasets. The main driver in the adoption of the LOD approach in the educational domain is the enrichment of the learning content and the learning experience by making use of various connected data sources.

Personal learning data from learning environments originate from tracking learners’ interactions with tools, resources or peers. The main driver for analyzing these data is the vision of personalized learning, which offers potential to create more effective learning experiences through new possibilities for predicting and reflecting the individual learning process.

To this end, Learning Analytics can be seen as an approach which brings together two different views: (i) the external view on publicly available Web data and (ii) the internal view on personal learner data, e.g. data about individual learning activities and histories. Learning Analytics aims at combining these two in a smart and innovative way to enable advanced educational services, such as the recommendation of (a) suitable educational resources to individual learners, or (b) peer students or external experts to cooperate with.
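A toy sketch of combining the two views (all data below is invented, not a project API): resource metadata as it might be exposed on the Web of Data is matched against topics derived from a learner's tracked activities:

```python
# Hedged sketch: rank openly published resources (external view) by topic
# overlap with a learner's interaction history (internal view).

def recommend(resources, learner_topics, top_n=2):
    """Rank resources by how many of their topics match the learner's."""
    scored = [(len(set(r["topics"]) & learner_topics), r["title"]) for r in resources]
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

# External view: metadata as it might be exposed as Linked Open Data
resources = [
    {"title": "Intro to Statistics", "topics": {"statistics", "probability"}},
    {"title": "Linear Algebra",      "topics": {"matrices", "vectors"}},
    {"title": "Data Mining Basics",  "topics": {"statistics", "machine-learning"}},
]
# Internal view: topics derived from the learner's tracked activities
learner_topics = {"statistics", "machine-learning"}

print(recommend(resources, learner_topics))
# → ['Data Mining Basics', 'Intro to Statistics']
```

Real systems would of course use richer models on both sides, but the structure, external metadata scored against an internal learner profile, stays the same.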

TOPICS
The workshop is looking for contributions touching the following topics.

Educational (Linked) Data
- Evaluating, promoting, creating and clustering of educational datasets, schemas and vocabularies
- Use of LOD for educational purposes
- Feasibility of standardization of educational datasets to enable exchange and interoperability
- Sharing of educational datasets among TEL researchers

Data Technologies:
- Technologies for the exploration of educational datasets, i.e., for filtering, interlinking, exposing, adapting, converting and visualizing educational datasets
- Real-world applications that show a measurable impact of Learning Analytics
- Real-world educational applications that exploit the Web of Data
- Tools to use and exploit educational Linked Open Data
- Innovative TEL applications that make large-scale use of the available open Web of data

Evaluation of Technologies and Datasets:
- (Standardized) evaluation methods for Learning Analytics
- Descriptions of data competitions

Privacy and Ethics:
- Policies on ethical implications of using educational data for learning analytics (privacy and legal protection rights)
- Guidelines for the anonymisation and sharing of educational data for Learning Analytics research

SUBMISSION
The workshop is looking for different types of submissions. We accept regular full papers (8–14 pages) and short papers (4–6 pages). Moreover, we are interested in anonymized datasets that can then be openly used in evaluating TEL recommender systems. Above all, we encourage you to demonstrate your data products and tools even if they are in an early state. Datasets and demonstrations should be submitted together with an extended abstract submission (up to 2 pages). For all paper submissions we require formatting according to the Springer LNCS template: http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0

Submissions should be made through the conference management tool ginkgo: http://ginkgo.cs.upb.de/events/lald12

All submitted papers will be peer-reviewed by at least two members of the program committee for originality, significance, clarity, and quality. Final versions of accepted submissions will be published in the CEUR-WS.org workshop proceedings, and the most promising contributions will be invited to the 2nd Special Issue on dataTEL in the International Journal of Technology Enhanced Learning (IJTEL). In addition, the authors are asked to contribute short summaries of their submissions to the dataTEL group space at TELeurope to encourage early information sharing and discussion, also with third parties. Based on the workshop submissions, the organizers will identify the most pressing research challenges to structure the workshop.

Questions can be sent to: hendrik.drachsler[at]ou.nl

IMPORTANT DATES
14.03.2012            Submission deadline for full and short papers
10.04.2012            Submission deadline for extended abstracts (describing data sets and demonstrations)
12.04.2012            Notification of acceptance
26.04.2012            Submission deadline for final papers
29.04.2012            Workshop
30.04. – 02.05.2012    LAK Conference

ORGANIZERS
Hendrik Drachsler; Open University of the Netherlands, NL
Stefan Dietze; L3S Research Center, DE
Mathieu d’Aquin; The Open University, UK
Wolfgang Greller; Open University of the Netherlands, NL
Jelena Jovanovic; University of Belgrade, SR
Abelardo Pardo; University Carlos III of Madrid, ES
Wolfgang Reinhardt; University of Paderborn, DE
Katrien Verbert; K.U.Leuven, BE

PROGRAMME COMMITTEE :
Markus Specht, Open University of the Netherlands, The Netherlands
Peter Sloep, Open University of the Netherlands, The Netherlands
Marco Kalz, Open University of the Netherlands, The Netherlands
Christian Glahn, ETH Zuerich, Switzerland
Erik Duval, K.U. Leuven, Belgium
Martin Wolpers, FIT Fraunhofer, Germany
Nikos Manouselis, Agro-Know Technologies, Greece
Olga Santos, aDeNu Research Group, UNED, Spain
Dragan Gasevic, Athabasca University, Canada
Felix Mödritscher, Vienna University of Economics and Business, Austria
Fridolin Wild, Open University, United Kingdom
Gawesh Jawaheer, City University London, United Kingdom
Ebner Hannes, Royal Institute of Technology (KTH), Sweden
Hanan Ayad, Desire2Learn, Canada
Melody Siadaty, Athabasca University, Canada
Philippe Cudré-Mauroux, University of Fribourg, Switzerland
Carsten Keßler, University of Münster, Germany
Davide Taibi, Institute for Educational Technologies, Italian National Research Council, Italy
Tom Heath, Talis, UK

Turning Learning into Numbers – A Learning Analytics Framework

The Dutch Higher Education Foundation SURF invited Wolfgang and me to a seminar on Learning Analytics, where we presented our Learning Analytics framework and a questionnaire that is built on top of it.
They brought together some interesting parties from different educational institutions (from schools to universities) and some companies.
One observation was that the companies mainly focus on business analytics for the educational sector and the management of an educational institute, whereas the educational designers and researchers presented tools to support students and teachers in improving learning and teaching.
That reminds me of the early TEL recommender systems, which applied approaches from systems like MovieLens and even used their datasets. They mainly recommended content from related persons, without considering the context of learners, such as learning goals or prior-knowledge levels, to recommend peers, learning activities, or learning paths.

Wolfgang and I tried to paint the big picture of Learning Analytics with the framework and give some practical examples. Both parts of the audience (the companies and the educational institutes) found the framework rather useful to shape the goals of Learning Analytics applications.
What became clear from the educational institutes is that we need to provide solutions for the big players (Moodle, Blackboard or SharePoint) if we want to run any experiments on Learning Analytics with them. Most of the educational providers in the Netherlands use one of these systems, and any learning analytics tool needs to address them. Thus, after prototyping and obtaining valuable outcomes, you need to target one of the big systems to disseminate your learning analytics solutions to the stakeholders.

Below you can find our presentation, which received 650 clicks in 3 hours. That was really impressive. I received the following mail from SlideShare:
“Turning Learning into Numbers – A Learning Analytics Framework” is being tweeted more than anything else on SlideShare right now. So we’ve put it on the homepage of SlideShare.net. 

Well done!

- SlideShare Team

Another indicator that Learning Analytics is a very hot topic. ;-)

Below you can find the presentation. Special thanks go to Peter Kraker, who provided us with his Twitter visualization tool that enabled me to show some real-time reflection examples of the seminar on Learning Analytics. Thanks Peter!

Grand Challenges of the dataTEL Theme Team presented at the ARV2011, La Clusaz, France

In the introduction slides below I briefly introduce which Grand Challenges of STELLAR are addressed by the dataTEL Theme Team, and how.
We are mainly addressing Grand Challenge 1, Contextualisation, and Grand Challenge 2, Connecting Learners, because recommender systems can be supportive for two tasks:

  1. Selecting the most suitable information from the overwhelming amount of information in the network is a challenging task.
  2. Connecting learners in the network is very important to overcome their isolation and to create effective learning communities.

For both tasks, contextualised information about learners needs to be captured and exploited in order to create personal recommendations for learners. Recommender systems are promising for these challenges, as their technologies match users on defined characteristics and create a kind of ‘neighborhood’ of like-minded users. In that way, recommender systems extract contextual information and offer valuable data to suggest suitable peer learners.
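The 'neighborhood of like-minded users' idea can be sketched with classic user-based collaborative filtering: cosine similarity between rating vectors. The names and ratings below are invented for illustration:

```python
# Hedged sketch: finding a learner's nearest neighbour by cosine similarity
# over rating vectors, as in user-based collaborative filtering.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Ratings for the same five resources (0 = not rated)
ratings = {
    "anna":  [5, 3, 0, 1, 4],
    "bert":  [4, 3, 0, 1, 5],   # similar taste to anna
    "clara": [0, 1, 5, 4, 0],   # quite different
}

target = ratings["anna"]
neighbours = sorted(
    ((cosine(target, v), name) for name, v in ratings.items() if name != "anna"),
    reverse=True,
)
print(neighbours[0][1])  # → bert (anna's nearest neighbour)
```

Items or peers liked by the nearest neighbours, but not yet seen by the target learner, become the candidate recommendations.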

Schedule for dataTEL workshop at ARV2011

Here you can find the schedule of the two-day dataTEL workshop at ARV2011. This time we will have two keynote speakers related to the dataTEL topics:

Shlomo Berkovsky (AU) and John Stamper (USA).

Shlomo Berkovsky is a Senior Research Scientist and Research Team Leader of the TLI project (CSIRO – Commonwealth Scientific and Industrial Research Organisation, Tasmanian ICT Centre). The project aims to provide individual users and their families with personalized dietary and health information to help them maintain a healthier lifestyle.
His research interests include user modeling and personalization. In particular, he is interested in recommender systems, collaborative and content-based filtering, mediation of user models, ubiquitous user modeling, context-aware personalization, personalized content generation, and use of machine learning and data mining techniques in user modeling and personalization.
Before joining CSIRO, he was a post-doctoral research fellow at the University of Melbourne. He graduated from the University of Haifa; the topic of his PhD was “Mediation of user models for enhanced personalization in recommender systems”.

John Stamper is the Technical Director of the Pittsburgh Science of Learning Center DataShop.  He is also a member of the research faculty at the Human-Computer Interaction Institute at Carnegie Mellon University.  His primary areas of research include Educational Data Mining and Intelligent Tutoring Systems.  John received his PhD in Information Technology from the University of North Carolina at Charlotte, holds an MBA from the University of Cincinnati, and a BS in Systems Analysis from Miami University.  Prior to returning to academia, John spent over ten years in the software industry.  John is a Microsoft Certified Systems Engineer (MCSE) and a Microsoft Certified Database Administrator (MCDBA). John was the co-chair of the 2010 KDD Cup Competition, titled “Educational Data Mining Challenge,” which centered on improving assessment of student learning via data mining.

On the 1st day Shlomo Berkovsky will give a keynote on: 

Setting Up a Data Contest
Abstract: Research contests have attracted attention in many areas, mainly due to their potential to boost research on a specific problem. Contests also facilitate a fair and objective evaluation means, as all the participants share the same data and task. This talk will focus on the details of organizing a research contest. Initially, we will overview several past contests: KDD Cup competition series, Netflix prize competition, and CAMRa challenge on context-aware recommendations. Then, we will discuss the essential components of a successful contest: selection of appropriate tasks, data processing and preparation, publicity and attraction of participants, and the logistics of carrying out the contest. Finally, we will spark the discussion on the upcoming I-KNOW dataTEL contest on predicting the performance of students with an intelligent tutoring system.

On the 2nd day John Stamper will give his keynote on:

DataShop: An Educational Data Mining Platform for the Learning Science Community
In this talk I will discuss my vision of creating a true platform for conducting educational data mining research. The talk will focus on DataShop, part of the Pittsburgh Science of Learning Center, which is an open data repository and a set of associated visualization and analysis tools. DataShop has data from thousands of students deriving from interactions with on-line course materials and intelligent tutoring systems. The data is fine-grained, with student actions recorded roughly every 20 seconds, and it is longitudinal, spanning semester- or year-long courses. As of February 2011, over 245 datasets are stored, including over 51 million student actions, which equates to over 150,000 student hours of data. Most student actions are “coded”, meaning they are not only graded as correct or incorrect, but are categorized in terms of the hypothesized competencies or knowledge components needed to perform that action. I plan to open the talk up as an interactive discussion in order to answer questions related to some of the key issues we faced in developing an open data repository, including security, privacy, and data diversity.
Feel free to go to http://pslcdatashop.org to sign up for a free account and access DataShop prior to the workshop.

Based on the contributions of the participants we identified the following four most pressing topics for the workshop:
1. Topic: Evaluation of recommender systems in TEL
2. Topic: Data supported learning examples
3. Topic: Datasets from Learning Object Repositories and Web content
4. Topic: Privacy and data protection for dataTEL

We will tweet about the event and you are free to send your remarks by using the hashtag #datatel11.

Here you can find the detailed workshop schedule:

X-pire! – a software for data degradation

“We may be finished with the past, but the past is not finished with us.” This famous quote from Paul Thomas Anderson’s film “Magnolia” is more true today than ever before. When you delete data from a web page nowadays, you never know if it really disappeared from the web. Therefore people, especially younger ones, are increasingly reminded to take care what kind of information they post on the Web, because it is a public space.

The wish to add expiry dates to data in order to regain control over personal information is heavily discussed on the Web and in several PhD theses. A computer scientist from Saarland University, Germany – Michael Backes, Professor of Information Security and Cryptography – has developed the first system that supports people in adding an expiration date to any digital content. The system is based on a combination of encryption and CAPTCHA technology that guards the data and removes access to it after the deadline. It is the first technical solution for this kind of problem, but I have my doubts that social networks like Facebook will adopt it.
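To illustrate the underlying idea (a deliberately simplified stdlib sketch, not the actual X-pire! protocol, which additionally uses CAPTCHAs to prevent automated key harvesting): the content itself stays encrypted, and a key service only releases the decryption key until the expiration date:

```python
# Hedged sketch of expiry-controlled access: a toy key service that refuses
# to hand out the decryption key once the content's deadline has passed.
import time

class KeyService:
    """Toy stand-in for the external service guarding decryption keys."""
    def __init__(self):
        self._store = {}

    def register(self, content_id, key, expires_at):
        self._store[content_id] = (key, expires_at)

    def fetch(self, content_id, now=None):
        key, expires_at = self._store[content_id]
        now = time.time() if now is None else now
        if now > expires_at:
            raise PermissionError("content expired")
        return key

service = KeyService()
service.register("photo42", key=b"secret", expires_at=time.time() + 3600)
print(service.fetch("photo42"))  # within the hour: key is released
```

Once the key service stops answering, every copy of the ciphertext on the Web becomes useless, which is exactly why the approach does not require cooperation from the sites hosting the content.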

Further information can be found here:

Recent Research on Recommender Systems in TEL

The first presentation this year was given at the Learning Network seminar series at CELSTEC. Special guest was Wolfgang Reinhardt from the University of Paderborn, who provided his view on data science in relation to awareness improvement for knowledge workers. The dataTEL presentation is based on the ECTEL10 one, but it also includes the latest developments on TEL recommender systems after the dataTEL System Marketplace at the RecSysTEL workshop, which was a turning point in the research community. In the presentation below I show some of the changes and the new developments. Surprisingly, we had a very controversial discussion, more controversial than at the ECTEL conference.

Let me briefly sum up the comments:

  • The collected datasets are far too small to conduct proper information retrieval on them.
  • But maybe they represent the majority of datasets that are available in education, so we have to adjust our techniques.

  • Not all of the datasets are related to learning, especially Mendeley.
  • Yes, that is correct, but why should we limit ourselves, as it is already quite a challenge to get a dataset.

  • The privacy protection right will stay, so we will never have the opportunity to use student data from an LMS like Blackboard for further analysis.
  • But we could ask the students if they agree to give us their data for research purposes and be very explicit about what we want to do with it. European Schoolnet did the same with the users of the eTwinning project.

You are currently browsing the archives for the privacy protection category.