Posts Tagged ‘recommender systems’
I’m currently reviewing some Learning Analytics papers and wondering whether any of you know of an evaluation framework or review guidelines for Learning Analytics.
I’m asking because one of the objectives of Learning Analytics is the personalization of learning; to measure which steps towards this personalization are promising, we need to define benchmarks and evaluation criteria in the form of a framework. Otherwise I can hardly compare the outcomes of one LA application or paper with another.
Developing such an evaluation framework is not an easy task. On the other hand, many related research fields have reached a kind of evaluation standard, for instance information retrieval with the TREC conference. These communities focus on performance, accuracy, precision, and recall measures that are rather technical but useful for showing the effects of a certain technology on a specific dataset.
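To make the technical side of such measures concrete, here is a minimal sketch of precision and recall at k for a single user's recommendation list. The item IDs and relevance judgments are invented for illustration; real evaluations would average these over many users and compare against a baseline.

```python
# Illustrative sketch: precision@k and recall@k for one user's
# recommendation list, given a set of known relevant items.

def precision_recall_at_k(recommended, relevant, k):
    """Return (precision@k, recall@k) for a ranked recommendation list."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical data: 5 recommended items, 4 items known to be relevant.
recommended = ["r1", "r2", "r3", "r4", "r5"]
relevant = ["r2", "r4", "r7", "r9"]
p, r = precision_recall_at_k(recommended, relevant, k=5)
print(p, r)  # 0.4 precision, 0.5 recall
```

Such numbers are easy to compute on a benchmark dataset, which is exactly why they dominate technical evaluations; whether they say anything about learning outcomes is the open question raised here.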
Learning Analytics is not only about technical measures. In the end we want to support teachers and students in their learning process and make the educational system more transparent. But measuring an increase in the effectiveness (learning outcomes) and efficiency (study time) of the learning process usually takes much longer than testing the technical measures. The best-known educational evaluation model is Kirkpatrick’s (http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm), but it requires several pre- and post-tests over a longer period of time.
So any ideas about an evaluation framework for Learning Analytics?
By the way, the ACM Recommender Systems conference applies the following evaluation criteria to its papers.
A good RecSys algorithms paper will:
• describe the recommender/ranking/prediction algorithm in sufficient detail that someone else could implement it
• articulate the important new idea(s) that the algorithm instantiates, in comparison to previously known algorithms
• demonstrate that performance on some well-defined metric is better than that of some baseline algorithm.
A good RecSys paper reporting on a case study of an application deployment will:
• Identify a novel type of item to be recommended or decision process to be influenced, in comparison to previously reported targets of recommender systems.
• Identify unusual properties of the new item type that created special problems or opportunities
• Explain any non-trivial mappings of known techniques to the new domain
• Report on challenges and how they were overcome
• Articulate lessons that might be relevant to others deploying Recommender Systems in similar or related contexts
A good RecSys paper about a new way of using recommendations/prediction/ranking to enhance the user experience will:
• Clearly explain the presentation technique
• Articulate what is novel about it, in comparison to existing techniques
• Demonstrate that it has desirable properties for users, through anecdotes or data from lab studies or field deployment
The Dutch higher education foundation SURF invited Wolfgang and me to a seminar on Learning Analytics, where we presented our Learning Analytics framework and a questionnaire that is built on top of it.
They brought together some interesting parties from different educational institutions (from schools to universities) and several companies.
One observation was that the companies mainly focus on business analytics for the educational sector and for the management of educational institutes, whereas the educational designers and researchers presented tools to support students and teachers in improving learning and teaching.
That reminds me of the early TEL recommender systems, which in the beginning were applied much as in the MovieLens system and even used its datasets. They mainly recommended content from related persons, without considering the context of learners, such as learning goals or prior-knowledge levels, to recommend peers, learning activities, or learning paths.
Wolfgang and I tried to paint the big picture of Learning Analytics with the framework and to give some practical examples. Both parts of the audience (the companies and the educational institutes) found the framework rather useful for shaping the goals of Learning Analytics applications.
What became clear from the educational institutes is that we need to provide solutions for the big players (Moodle, Blackboard, or SharePoint) if we want to run any Learning Analytics experiments with them. Most educational providers in the Netherlands use one of these systems, and any learning analytics tool needs to address them. Thus, after prototyping and obtaining valuable outcomes, you need to target one of the big systems to disseminate your learning analytics solutions to the stakeholders.
Below you can find our presentation, which received 650 clicks within 3 hours. That was really impressive. I received the following mail from SlideShare:
“Turning Learning into Numbers – A Learning Analytics Framework” is being tweeted more than anything else on SlideShare right now. So we’ve put it on the homepage of SlideShare.net. Well done! - SlideShare Team
Another indicator that Learning Analytics is a very hot topic.
Below you can find the presentation. Special thanks go to Peter Kraker, who provided us with his Twitter visualization tool, which enabled me to show some real-time reflection examples from the seminar on Learning Analytics. Thanks, Peter!
- Recommend knowledge people / peers
- Recommend a sequence of learning resources to achieve a certain competence
- Recommend learning activities rather than a learning resource (an item)!
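The second of these tasks, recommending a sequence of learning resources towards a competence, can be sketched as an ordering problem over prerequisites. The data model below is hypothetical (each resource simply lists its prerequisite resources); a real system would derive such relations from a competence model or usage data.

```python
# Minimal sketch (hypothetical data model): order learning resources so
# that every prerequisite comes before the resources that depend on it.
from graphlib import TopologicalSorter

# Hypothetical prerequisite map: resource -> set of prerequisite resources.
prerequisites = {
    "intro": set(),
    "basics": {"intro"},
    "exercise": {"basics"},
    "assessment": {"basics", "exercise"},
}

# A topological order of this graph is a valid learning path that
# respects all prerequisite relations.
path = list(TopologicalSorter(prerequisites).static_order())
print(path)  # ['intro', 'basics', 'exercise', 'assessment']
```

This only captures the sequencing constraint; which of several valid paths to recommend to a particular learner is exactly where learner context comes in.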
Next to these tasks, a couple of information support systems for tutors and teachers are conceivable, but if we follow Resnick’s definition of recommender systems, these are not personalized recommender systems but rather decision support systems.
In any case, research on context will have a major impact on the TEL RecSys of the future. Context-aware RecSys will widen the differences between e-commerce RecSys and RecSys for learning/TEL.
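One common way to make a recommender context-aware is contextual pre-filtering: restrict the candidate items using the learner's context before ranking. The learner and item attributes below are invented for illustration, not a prescribed schema.

```python
# Hedged sketch of contextual pre-filtering for a TEL recommender.
# Learner and item fields are hypothetical; a real system would use a
# richer learner model and a learned ranking function.

learner = {"goal": "statistics", "level": 2}  # hypothetical learner context

catalog = [
    {"id": "a", "topic": "statistics", "difficulty": 1, "rating": 4.1},
    {"id": "b", "topic": "statistics", "difficulty": 3, "rating": 4.8},
    {"id": "c", "topic": "algebra",    "difficulty": 2, "rating": 4.9},
    {"id": "d", "topic": "statistics", "difficulty": 2, "rating": 4.5},
]

# Pre-filter: keep items that match the learning goal and do not exceed
# the learner's prior-knowledge level; then rank the remainder by rating.
candidates = [i for i in catalog
              if i["topic"] == learner["goal"]
              and i["difficulty"] <= learner["level"]]
ranked = sorted(candidates, key=lambda i: i["rating"], reverse=True)
print([i["id"] for i in ranked])  # ['d', 'a']
```

Note how the globally top-rated items ("c", "b") are never shown: the learner's goal and prior knowledge change the result, which is precisely what separates a TEL recommender from a generic e-commerce one.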
CfP: dataTEL Special Issue in the International Journal of Technology Enhanced Learning (IJTEL), deadline for submission 25 October 2011
CALL FOR JOURNAL PAPERS
Special Issue on dataTEL
“Datasets and Data Supported Learning in Technology-Enhanced Learning”
International Journal of Technology Enhanced Learning (IJTEL)
ISSN (Online): 1753-5263 - ISSN (Print): 1753-5255
Deadline of submissions: 25 October 2011
The prospect of strong growth of open and linked data in the knowledge society creates opportunities for new insights through advanced analysis methods based on, e.g., information extraction, filtering, and retrieval technologies. Educational institutions also create and own large datasets on their students and course activities. The analytic use of such data, however, is very limited when it comes to creating new educational services, recommending suitable peers, content, processes, or goals, and improving the personalization of learning. Nevertheless, personalized learning is expected to have the potential to create more effective learning experiences and to accelerate learners’ time-to-competence. In the educational world, the literature is sparse on how to build upon today’s very limited public datasets and how to deal with the lack of agreed quality standards for the personalization of learning.
The special issue on dataTEL in IJTEL aims to address this issue by collecting high value research papers to develop a body of knowledge about data-based personalization of learning. So far, there is no consensus on algorithms that can be successfully applied to make reliable analyses of data in a specific learning setting. Having an initial collection of datasets, coupled with case studies of their use in TEL, could be a first major step towards a theory of personalisation within TEL that can be based on empirical experiments with verifiable and valid results.
However, data driven research confronts researchers with a new set of challenges, for instance, a lack of common dataset formats or policies to share educational datasets, a huge variety of different evaluation methods for comparing diverse personalization techniques, and new ethical and privacy issues that arise from the ability to link and mine information.
Therefore, the objective of this special issue is to explore suitable datasets for TEL – with a specific focus on recommender and information filtering systems that can take advantage of these datasets. In this context, new challenges emerge, such as unclear legal protection rights and privacy issues, suitable policies and formats for sharing data, required pre-processing procedures and rules for creating sharable datasets, common evaluation criteria for recommender systems in TEL, and what a dataset-driven future in TEL could look like.
Relevant topics include, but are not limited to:
- descriptions of datasets that can be used for experimentation
- descriptions of data experiments (methods or results of experiments)
- experiences with those datasets
- dealing with legal protection rights towards datasets on a European level
- privacy preservation for educational datasets
- methods of effective anonymisation of educational datasets
- management and pre-processing procedures for educational datasets
- future scenarios for educational datasets
- impact of educational datasets for learners, teachers, and parents
- mash-ups based on educational datasets
- recommender approaches that are based on educational data
- evaluation methodologies and metrics for educational recommender systems
SPECIAL ISSUE CO-EDITORS
Hendrik Drachsler, Open University, The Netherlands
Katrien Verbert, K.U. Leuven, Belgium
Miguel-Angel Sicilia, University of Alcalá, Spain
Nikos Manouselis, Agro-Know Technologies, Greece
Stefanie Lindstaedt, KnowCenter, Austria
Martin Wolpers, Fraunhofer Institute for Applied Information Technology, Germany
Riina Vuorikari, European Schoolnet, Belgium
Authors are invited to submit original, unpublished research papers. All submitted papers will be peer-reviewed by at least two members of the program committee for originality, significance, clarity, and quality. In addition, authors are asked to contribute short abstracts of their submissions to the dataTEL group space at TELeurope.
Submissions will be handled through the EasyChair submission system.
Details of the journal and of manuscript preparation are available here:
Any questions and submissions should be sent to:
REVIEW COMMITTEE (to be confirmed)
Erik Duval, K.U. Leuven, Belgium
Seda Gurses, K.U. Leuven, Belgium
Abelardo Pardo, University Carlos III of Madrid, Spain
Julià Minguillón, Open University of Catalonia, Spain
Olga Santos, aDeNu, Spanish National University for Distance Education, Spain
Julien Broisin, Université Paul Sabatier, France
Christoph Rensing, TU Darmstadt, Germany
Shlomo Berkovsky, CSIRO, Australia
John Stamper, Datashop, Pittsburgh Science of Learning Center, USA
Eelco Herder, Forschungszentrum L3S, Germany
Martin Memmel, DFKI, Germany
Xavier Ochoa, Escuela Superior Politécnica del Litoral, Ecuador
Fridolin Wild, KMI, Open University, UK
Wolfgang Reinhardt, University of Paderborn, Germany
Wolfgang Greller, Open Universiteit, The Netherlands
Marco Kalz, Open Universiteit, The Netherlands
Adriana Berlanga, Open Universiteit, The Netherlands
Peter Sloep, Open Universiteit, The Netherlands
Ralf Klamma, RWTH Aachen, Germany
Pythagoras Karampiperis, NCSR Demokritos, Greece
Giannis Stoitsis, IEEE, Greece
Submission of manuscripts: 25 October 2011
Completion of first review: 30 November 2011
Submission of revised manuscripts: 15 January 2012
Final decision notification: 10 February 2012
Publication date (tentative): February 2012
The manuscripts should be original, unpublished, and not under consideration for publication elsewhere at the time of submission to the International Journal of Technology Enhanced Learning and during the review process.
Please carefully follow the author guidelines at http://www.inderscience.com/mapper.php?id=31 while preparing your manuscript. To get familiar with the style of the journal, please see a previous issue at http://www.inderscience.com/browse/index.php?journalID=246
All manuscripts will be subject to the usual high standards of peer review. Each paper will undergo double blind review.