Archive for the ‘data mining’ Category
Learning Analytics is a hot research topic at the moment and I’m curious what impact it will have on education systems in the long term. In any case, it currently ranks high on research agendas: it is even an explicit research topic within the next EU FP7 TEL call in January 2013. At CELSTEC we have recently won two new EU projects that directly support our Learning Analytics research efforts:
- Open Discovery Space (started 1 April 2012)
- LinkedUp (starts 1 November 2012)
Both projects address the main research challenges we identified during the dataTEL project. Based on these, we have identified six main research objectives for the upcoming years:
- Collecting, sharing and open access to educational datasets
- Evaluation of data-driven applications
- Legal aspects (Ownership, Privacy, ethics)
- Visualizations of data
- Personalization and Recommender Systems
- Awareness support and reflection
Regarding research objective 1 – Educational data:
Open Discovery Space (ODS) and LinkedUp will make vast amounts of educational data available to end users and for data-driven research. The Open Discovery Space project builds on the ARIADNE Foundation infrastructure, which has already been used to deploy an initial version of the portal providing access to a critical mass of about 1,000,000 content resources. This existing critical mass of eLearning resources will be expanded over the runtime of the project to ~1,550,000 resources in total, and the portal is expected to be connected to around 15 educational portals of regional, national or thematic coverage. Besides providing the educational resources, ODS will create technology to collect and share social data about the educational resources (ratings, tags and annotations) and make it available as Linked Data. With these objectives ODS contributes to the research objectives of the Learning Analytics and Linked Data workshop we organized at the LAK12 conference.
LinkedUp also aims to make more educational datasets publicly accessible. It will create a pool of existing educational datasets and organize various support and training activities around this data pool to stimulate the development of new and innovative data-driven tools for Technology-Enhanced Learning and Learning Analytics.
LinkedUp will therefore strongly follow the Linked Data approach, which has been applied successfully in a wide range of domains to expose datasets from a large variety of sources, leading to a globally distributed Web cloud of over 31 billion distinct statements. An overview of the currently available datasets in the LOD cloud can be found at http://lod-cloud.net/state.
Next to Open Educational Resources and Linked Data, LinkedUp will also consider publicly accessible data from data-driven companies such as Thomson Reuters (OpenCalais) or Mendeley. These companies provide access to their data via APIs that can be used to develop innovative data products within the LinkedUp competition.
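To make the Linked Data idea more concrete, here is a minimal, self-contained sketch (in Python, with invented URIs and property names) of how educational resources and their social data can be expressed as subject-predicate-object triples and queried by pattern. This is the same data model that SPARQL endpoints expose at Web scale, reduced to a toy in-memory form:

```python
# Educational resources plus social data (ratings, tags) as
# subject-predicate-object triples, with a naive pattern query.
# All URIs and property names below are hypothetical examples.

triples = [
    ("ex:course42", "dc:title", "Intro to Data Mining"),
    ("ex:course42", "dc:language", "en"),
    ("ex:course42", "ex:rating", "4"),
    ("ex:course42", "ex:tag", "learning-analytics"),
    ("ex:course99", "dc:title", "Linked Data Basics"),
    ("ex:course99", "ex:tag", "linked-data"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which resources carry any tag at all?
tagged = {s for s, p, o in query(triples, p="ex:tag")}
print(sorted(tagged))  # both example courses are tagged
```

Because every statement has the same shape, datasets from different portals can be merged by simple concatenation, which is precisely what makes the Linked Data approach attractive for pooling educational data.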
Regarding research objective 2 – Evaluation of data applications:
There is a pressing need in Learning Analytics to make the effects of different data applications on learning and its stakeholders comparable, in order to identify best-practice examples. Until now there is no common knowledge about which algorithm works better than another with a certain user model in a specific learning setting. LinkedUp directly addresses this challenge by developing an evaluation framework that can be applied to evaluate data-driven applications. The evaluation framework will be one of the major outcomes of the project. It will be developed together with a board of 30 experts in the field through the Group Concept Mapping approach.
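As a toy illustration of the kind of comparability the paragraph above calls for, the sketch below scores two hypothetical recommenders on the same held-out data with one shared metric, precision@k. The algorithms, resources and relevance judgements are all invented; the point is only that a common metric over a common dataset makes results comparable:

```python
# Compare two hypothetical recommenders on identical held-out data
# using one shared metric (precision@k). All data is made up.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    return sum(1 for r in top_k if r in relevant) / k

held_out_relevant = {"res2", "res5"}   # resources the learner actually used later
algo_a = ["res2", "res9", "res5"]      # ranked output of recommender A
algo_b = ["res7", "res8", "res2"]      # ranked output of recommender B

for name, ranking in [("A", algo_a), ("B", algo_b)]:
    print(name, round(precision_at_k(ranking, held_out_relevant, k=3), 3))
```

A real evaluation framework would of course fix many more things (datasets, protocols, user models, statistical tests), but the principle is the same: identical inputs, identical metric, comparable outputs.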
Regarding research objective 3 – Legal aspects:
In this context both projects have to come up with solutions that enable the use of educational data for data-supported applications. Both projects will therefore mainly focus on the Creative Commons license model. All datasets for which this is appropriate shall be published on the projects’ websites under a Creative Commons license (http://creativecommons.org/) or another appropriate license. In addition, we want to explore related initiatives like the Creative Commons Learning Resource Metadata Initiative (LRMI), which aims to merge different competing initiatives in the area of OER description and to produce a usable and well-defined RDF schema for learning resource description (http://wiki.creativecommons.org/LRMI). Regarding privacy and ethics, both projects will review privacy requirements and concerns in each participating country in order to develop a suitable IPR & licensing agreement for the data pools.
Research objectives 4-6 – Visualizations, Personalization, and Awareness support:
These research objectives will also be addressed by both projects at a later stage. ODS addresses all three by providing innovative navigation and visualization tools to explore the vast amount of collected data within the ODS portal in a personalized way. We will investigate how to combine visualization and social navigation to increase the satisfaction of users when searching for resources, as well as to explain the rationale for the various selections or recommendations. Within LinkedUp we will support various projects that focus on these research objectives through the LinkedUp competition. We will organize three data competitions, support the participants with suitable datasets and technology support workshops, and provide substantial funding based on the assessment of the participating teams’ tools with the evaluation framework.
Looking forward to these exciting research activities!
Below is the presentation of the paper written by Wolfgang Greller and myself on our international survey on Confidence in Learning Analytics, given at the LAK12 conference in Vancouver, Canada. The framework study was rated by many stakeholders as very helpful for describing the current needs of the young Learning Analytics field. Quite a few other researchers and organisations, such as SURF and the OU UK, have pointed to the study, and it was rated as the most helpful model for introducing learning analytics and its core research challenges to related stakeholders.
The article reported the results of an exploratory community survey in learning analytics that aimed at extracting the perceptions, expectations and levels of understanding of stakeholders in the domain. Structured along six dimensions, we came to a number of conclusions, which we present below.
- Stakeholders: Participants identified the main beneficiaries in learning analytics as learners and teachers followed by organisations. Furthermore, the majority of respondents agreed that the biggest benefits would be gained in the teacher-to-student relationship and that learners would almost certainly require teacher help to learn from an analysis and for taking the right course of action. This is rather surprising as learning analytics is seen by many researchers as an innovative liberating force that would be able to change traditional learning by reflection and peer support, thus strengthening independent and lifelong learning. This latter opinion on independence could be seen in the ‘objective’ section of the survey (cf. chapter 3.2 above) where the majority expressed a preference for learning analytics to pay special attention to non-formalised and innovative ways of teaching and learning. Yet, respondents expect less potential impact on the student-to-student and the teacher-to-teacher relationships. This current perspective may be affected by the scarcity of learning analytics applications that demonstrate the innovative possibilities for learning and teaching. Thus people may not have a clear point of reference as, for example, is the case for ‘social networks’ where an established group of competitive platforms already exists.
- Objectives: The survey concludes further that research on learning analytics should focus on reflection support. The attained results clearly emphasized the importance of ‘stimulating reflection in the stakeholders about their own performance’. This goal could be supported by revealing hitherto hidden information about learners, which was the second most important objective. At the same time more timely information, institutional insights, and insights into the learning context were other areas of interest to the constituency.
- Data: Our institutional inventory in chapter 3.3 gives an overview of the most widespread IT systems. These could be prioritised by learning analytics technologies to gain an institutional foothold. They also provide the best ground for inter-institutional data sharing. Anonymisation can perhaps be seen as the most important enabler for such sharing to happen. It is emphasised in a number of responses as the second most important data attribute and confirmed by the willingness of people to share if data is anonymised. For a clear majority, anonymisation also reduces fears of privacy breaches through sharing (cf. chapter 3.5). On the other hand, when it comes to internal sharing with departments and operational units of the same institution, the use of available data will continue to be an uphill struggle and, according to participants, require good justification. Here, perhaps, a clearer mandate for ethical boards, which are already widely in place, may help.
- Methods: Chapter 3.4 on methods revealed that trust in learning analytics algorithms is not well developed. We interpret the mid-range return levels as hesitation towards “calculating” education and learning. What seems interesting to us is that the widely interpretable hope for gaining a comprehensive view on the learning progress was given the highest confidence, but perhaps this shows wishful thinking rather than a real expectation. Overall, expectations of impact on assessment were rather low. A majority of people did not see easier or more objective assessments coming out of learning analytics (cf. chapter 3.2). They were also not fully convinced that it would provide a good assessment of a learner’s state of knowledge (cf. chapter 3.4).
- Constraints: A large proportion of respondents thought learning analytics may lead to breaches of privacy and intrusion. Yet, they ranked privacy and ethical aspects as of lesser importance to consider (cf. chapter 3.5) or as belonging to further competence development (cf. chapter 3.6). However, data ownership was expressed as highly important. This may be interpreted to mean that if ownership of data lies with the learners themselves, there is no perceived risk of privacy or ethical abuse. In any case, it seems that many organisations have ethical boards and guidelines in place. These may come to play an increasingly important role in institutional data exploitation, since a large number of respondents trust that anonymisation of educational data is possible but not necessarily sufficient to enable full internal exploitation of the educational data within an organisation.
- Competences: In the area of competences, participants mainly stressed the importance of self-directedness, critical reflection, analytic skills, and evaluation skills. On the other hand, few believe that students already possess these skills. This indicates to us a need to support students in developing these learning analytics competences. In conclusion, the results suggest that there is little faith that learning analytics will lead to more independence of learners in controlling and managing their learning process. This identifies a clear need to guide students towards more self-directedness and critical reflection if learning analytics is to be applied more broadly in education. This interpretation is quite in contrast with some suggestions made with respect to the empowerment of learners through graphical reflection of the learning process and further access to additional information regarding their learning progress.
The dataset used for this article and a pre-print of the study are available in the dspace.ou.nl repository (at http://dspace.ou.nl/handle/1820/3850). In this way, we would like to encourage the learning analytics community to gain additional insights from our dataset for this fast-evolving research topic.
OUNL is really a cornerstone of the LAK12 conference, with two workshops and two full papers. Together with some international colleagues (George Siemens, Dragan Gasevic, Stefan Dietze, Wolfgang Reinhardt, and Abelardo Pardo) we organized a full-day workshop on ‘Linked Data and Learning Analytics – #LALD’ at LAK12.
LALD is a very visionary workshop that assumes linked datasets will become increasingly important for data-driven research. We envision that in the near future research will take advantage of configuration-like files that assemble linked datasets for data-driven research. At the moment, Learning Analytics and data researchers lack publicly available datasets to test and compare their findings. The main objective of the 1st International Workshop on Learning Analytics and Linked Data (#LALD2012) is to connect the research efforts on Linked Data and Learning Analytics in order to create visionary ideas and foster synergies between the two young research fields.
Below you can find the slides we used during the workshop.
Dragan’s slides on semantic web are here: [here]
Representation of the data is critical to sense making: [here]
Learning analytics and guidelines for ethical use: [here]
The connectivist pedagogy, a concept that addressed the potential of Learning Analytics and linked data:
Anderson and Dron 2011 [here]
Mendeley is recruiting a Marie Curie Senior Research Fellow. Your primary responsibility will be to ensure that Mendeley’s research catalogue (i.e. collection of articles) is of high quality. Mendeley has crowdsourced the world’s largest research catalogue with over 50 million unique articles contributed by almost two million users over a period of four years. With your expert knowledge in data technologies and algorithms, you will take ownership of this catalogue, and work on innovative techniques for improving its quality. Your work should result in a cleaner, better structured and more scalable catalogue.
This position is part of the TEAM project (http://team-project.tugraz.at). You will spend 1 year in Mendeley’s London office before spending 1 year at TU Graz, the Knowledge Management Institute (http://kmi.tugraz.at/), Austria, collaborating with a top-class team. You will be passionate about working with large scale data collections and take pride in producing high quality data.
Ensure that the research catalogue is of high quality
Understand, maintain and help develop current crowdsourcing system
Disseminate results from your work both internally and externally
What you’ll be doing
Crowdsourcing a homogeneous catalogue from heterogeneous data sources, using modern data techniques
Identifying data sources, judging their appropriateness and working with data engineers to import them into the catalogue
Working with Data Engineers and Platform Team to make reliable/scalable systems
Working with Data Architect to ensure coherent data mapping, ontologies and schemas
Working with Mendeley’s Chief Scientist in contributing to solving data problems outside of the scope of catalogue crowdsourcing
Working 1 year from Mendeley’s London office, followed by 1 year in TU Graz before returning to London, with regular travel between both locations
What you should bring
PhD in the field of Computer Science or 4-10 years of full-time research (following first publication)
Expert knowledge of text and document processing, with strong machine learning background
Experience working with large-scale catalogues
Database integration experience
2+ years of Java programming; can independently prototype solutions to problems
Experience with big data technologies (e.g. Hadoop, MapReduce, NoSQL)
Unix skills, preferably Linux
Fluent spoken and written English
Strong presentation skills in communicating with experts and novices
What we offer
Salary of £50k per annum + stock options
No out-of-hours support expected
25 days holidays
Company benefits such as: cycle-to-work scheme, childcare vouchers, BUPA (private healthcare), Friday beer o’clocks (snacks and drinks on the house), free breakfast, monthly team nights out, annual events (Christmas party and summer barbecue)
Working in a great environment in a central London office with roof terrace
Nationality: The researcher may be a national of a Member State of the Union, of an Associated Country or of any other third country
Mobility: At the time of selection, the researcher must not have resided or carried out his/her main activity in the country of the beneficiary home organisation for more than 12 months in the 3 years immediately prior to his/her selection under the project. (Different rules apply to international European interest organisations and international organisations.)
The appointed researcher must also not have spent more than 12 months in the same appointing organisation in the 3 years immediately prior to selection by the home organisation.
If you are interested, send your CV and cover letter to jobs [at] mendeley [dot] com. If you are selected for an interview, we will let you know within two weeks.
We recently submitted the final version of a book on “Recommender Systems for Learning” (#RSFL) to Springer (to appear in 2012) that focuses on the past 10 years of research on recommender systems in technology-enhanced learning (TEL).
We introduced recommender systems and compared them to relevant work in TEL such as adaptive educational hypermedia, learning networks, educational data mining and learning analytics. We then framed TEL as a recommendation problem, discussing how the recommendation problem is defined, what the recommendation goals are, and what the recommendation context usually covers.
We reviewed existing TEL datasets that may be used to support experimentation and testing, and discussed how they can drive relevant research. We reported an extensive analysis of the recommender systems for educational applications that can be found in the literature. Finally, we reflected on some major challenges that we see as important to face in the years to come, also outlining some potential directions for future research.
All the bibliography covered by this book is also available in an open Mendeley group with the same name, “Recommender Systems for Learning”, and will continue to be enriched with additional references. We would like to encourage readers to sign up for this group and connect to the community of people working on this topic, gaining access to the collected bibliography but also contributing pointers to new relevant publications within this fast-emerging research field.
Science 2.0 deals with the involvement of the web in science. It spans from the utilization of Web 2.0 tools and technologies in research to a more open and sharing approach to science. Some definitions of Science 2.0 even include notions of a methodological change due to the abundance of data, and the nature of the socio-technical systems on the web. For this special track, we would like to address four issues in Science 2.0 that have proven both promising and challenging at the same time:
1. The management of scientific data, both primary and secondary data (such as publication metadata, and other scientific content on the web) as a precondition for Science 2.0.
2. The recommendation of people and resources as a consequential next step in an exponentially growing scientific environment.
3. Quantitative and qualitative analysis of science based on data from scholarly communication on the web.
4. The change in scientific practices due to the involvement of Science 2.0 tools and technologies in the research process and the effects this has on science itself.
Topics of interest include but are not limited to:
* Definition of data schemes and interoperability formats
* Semantic Web standards for Science 2.0
* Social mining and metadata extraction in academic resources
* Metadata quality and quality assessment
* Design and architecture of data sharing facilities
* Systems design accounting for standardized data sets
* Applications for recommendation in science
* Specific challenges for recommendation in science
* Information retrieval in academic papers
* Recommendation algorithms and quality indicators
* Changes in scientific practices due to Web 2.0
* Methodological issues and interdisciplinarity in Science 2.0
* Opportunities and threats for researchers and research organizations
* Applications in and for Science 2.0
* Awareness-support for Science 2.0 activities
* Crowd-sourcing in science
* Robust methods for dealing with noisy crowd sourced data
30 April 2012: Submission of full papers (8 pages) and demos (4 pages)
31 May 2012: Notification of acceptance
30 June 2012: Camera ready version (8 pages)
5 Sept.-7 Sept. 2012: i-KNOW 2012 Conference
We are inviting research papers of up to 8 pages including references and an optional appendix. Furthermore, we invite demos for the special track. Demo submissions should consist of a 4 page description that allows us to judge the quality of your demonstration. The Conference Proceedings of i-KNOW 2012 will be published by ACM ICPS.
Paper Submission Details: http://i-know.tugraz.at/i-science/paper-submission
In case of problems or questions concerning the submission of papers, please contact the track chairs at pkraker[at]know-center.at
Notification of Acceptance and Publishing
Authors of accepted papers will be notified by 31 May 2012. Accepted papers and demos will be included in the Conference Proceedings. The Conference Proceedings of i-KNOW 2012 will be published by ACM ICPS. At least one author of an accepted paper must register for i-KNOW 2012 before the deadline for camera ready versions (30 June 2012) in order to get the paper published in the conference proceedings.
Chairs of Science 2.0
The organization team of the Science 2.0 Special Track consists of the following people:
* Peter Kraker, Know-Center Graz (Austria)
* Roman Kern, Know-Center Graz (Austria)
* Kris Jack, Mendeley (UK)
* Hendrik Drachsler, Open Universiteit Nederland (Netherlands)
* Erik Duval, Katholieke Universiteit Leuven (Belgium)
* Olivier Ferret, CEA Saclay Nano-INNOV (France)
* Michael Granitzer, University of Passau (Germany)
* Greg Grefenstette, Exalead (France)
* Paul Groth, VU University of Amsterdam (Netherlands)
* Denis Gillet, École Polytechnique Fédérale de Lausanne (Switzerland)
* Min-Yen Kan, National University of Singapore (Singapore)
* Daniel Lemire, LICEF Research Center (Canada)
* Jean-Louis Liévin, ideXlab (France)
* Isabella Peters, Heinrich-Heine-Universität Düsseldorf (Germany)
* Jason Priem, University of North Carolina (United States)
* Wolfgang Reinhardt, University of Paderborn (Germany)
* Katrin Weller, Heinrich-Heine-Universität Düsseldorf (Germany)
* Fridolin Wild, The Open University (UK)
CALL FOR PAPERS
1st International Workshop on Learning Analytics and Linked Data (#LALD2012)
in conjunction with the 2nd Conference on Learning Analytics and Knowledge (LAK’12)
29.04. – 02.05.2012, Vancouver (Canada).
Jointly organized by the http://linkededucation.org initiative and the EATEL SIG dataTEL (http://bit.ly/datatel).
Workshop website: http://lald.linkededucation.org/
Submission deadline full and short papers: 14.03.2012
Submission deadline extended abstracts : 10.04.2012
The main objective of the 1st International Workshop on Learning Analytics and Linked Data (#LALD2012) is to connect the research efforts on Linked Data and Learning Analytics to create visionary ideas and foster synergies between both young research fields. Therefore, the workshop will collect, explore, and present datasets, technologies and applications for Technology-Enhanced Learning (TEL) to discuss Learning Analytics approaches which make use of educational data or Linked Data sources. During the workshop, an overview of available educational datasets and related initiatives will be given. The participants will have the opportunity to present their own research with respect to educational datasets, technologies and applications and discuss major challenges to collect, reuse and share these datasets.
In TEL, a multitude of datasets exists containing detailed observations of events in learning environments that offer new opportunities for teaching and learning. The available datasets can be roughly divided into (a) Linked Data – open Web data and (b) personal learning data from different learning environments.
Open Web data covers educational data publicly available on the Web, such as Linked Open Data (LOD) published by institutions about their courses and other resources; examples include (but are not limited to), The Open University (UK), the National Research Council (CNR, Italy), Southampton University (UK) or the mEducator Linked Educational Resources. It also includes the emergence of LD-based metadata schemas and TEL-related datasets. The main driver in the adoption of the LOD approach in the educational domain is the enrichment of the learning content and the learning experience by making use of various connected data sources.
Personal learning data from learning environments originate from tracking learners’ interactions with tools, resources or peers. The main driver for analyzing these data is the vision of personalized learning, which offers the potential to create more effective learning experiences through new possibilities for predicting and reflecting on the individual learning process.
To this end, Learning Analytics can be seen as an approach which brings together two different views: (i) the external view on publicly available Web data and (ii) an internal view on personal learner data, e.g. data about individual learning activities and histories. Learning Analytics aims at combining these two in a smart and innovative way to enable advanced educational services, such as recommendation (a) of suitable educational resources to individual learners, (b) peer students or external expert to cooperate with.
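A minimal sketch of this combination of views might look as follows. The internal view is which resources each learner interacted with; the external view is topic metadata that could come from an open Linked Data source. All learner IDs, resource names and topics below are invented for illustration:

```python
# Hypothetical sketch: combine internal interaction data with external
# topic metadata to recommend unseen resources used by peers that match
# the learner's dominant topic. All data is made up.

interactions = {            # internal view: learner -> resources used
    "alice": {"res1", "res2"},
    "bob":   {"res1", "res2", "res3"},
    "carol": {"res4"},
}
topics = {                  # external view: resource -> topic (e.g. from LOD)
    "res1": "statistics", "res2": "statistics",
    "res3": "statistics", "res4": "biology",
}

def recommend(learner, interactions, topics):
    """Recommend unseen resources used by peers, filtered to the
    learner's most frequent topic."""
    seen = interactions[learner]
    dominant = max({topics[r] for r in seen},
                   key=lambda t: sum(topics[r] == t for r in seen))
    candidates = set()
    for peer, resources in interactions.items():
        if peer != learner:
            candidates |= resources - seen
    return sorted(r for r in candidates if topics[r] == dominant)

print(recommend("alice", interactions, topics))
```

Real systems would use far richer models, but the split is the same: the external data supplies meaning (topics), the internal data supplies behaviour (who used what), and the recommendation arises from combining the two.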
The workshop is looking for contributions touching the following topics.
Educational (Linked) Data
- Evaluating, promoting, creating and clustering of educational datasets, schemas and vocabularies
- Use of LOD for educational purposes
- Feasibility of standardization of educational datasets to enable exchange and interoperability
- Sharing of educational datasets among TEL researchers
- Technologies for the exploration of educational datasets, i.e., for filtering, interlinking, exposing, adapting, converting and visualizing educational datasets
- Real-world applications that show a measurable impact of Learning Analytics
- Real-world educational applications that exploit the Web of Data
- Tools to use and exploit educational Linked Open Data[e]
- Innovative TEL applications that make large-scale use of the available open Web of data
Evaluation of Technologies and Datasets:
- (Standardized) evaluation methods for Learning Analytics
- Descriptions of data competitions
Privacy and Ethics:
- Policies on ethical implications of using educational data for learning analytics (privacy and legal protection rights)
- Guidelines for the anonymisation and sharing of educational data for Learning Analytics research
The workshop is looking for different types of submissions. We accept regular full papers (8-14 pages) and short papers (4-6 pages). Moreover, we are interested in anonymized datasets that can then be openly used in evaluating TEL recommender systems. Above all, we encourage you to demonstrate your data products and tools even if they are at an early stage. Datasets and demonstrations should be submitted together with an extended abstract (up to 2 pages). For all paper submissions we require formatting according to the Springer LNCS template: http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0
Submissions should be made through the conference management tool ginkgo: http://ginkgo.cs.upb.de/events/lald12
All submitted papers will be peer-reviewed by at least two members of the program committee for originality, significance, clarity, and quality. Final versions of accepted submissions will be published in the CEUR-WS.org workshop proceedings, and the most promising contributions will be invited to the 2nd Special Issue on dataTEL in the International Journal of Technology Enhanced Learning (IJTEL). In addition, authors are asked to contribute short summaries of their submissions to the dataTEL group space at TELeurope to encourage early information sharing and discussion with third parties. Based on the workshop submissions, the organizers will identify the most pressing research challenges to structure the workshop.
Questions can be sent to: hendrik.drachsler[at]ou.nl
14.03.2012 Submission deadline for full and short papers
10.04.2012 Submission deadline for extended abstracts (describing data sets and demonstrations)
12.04.2012 Notification of acceptance
26.04.2012 Submission deadline for final papers
30.04. – 02.05.2012 LAK Conference
Hendrik Drachsler; Open University of the Netherlands, NL
Stefan Dietze; L3S Research Center, DE
Mathieu d’Aquin; The Open University, UK
Wolfgang Greller; Open University of the Netherlands, NL
Jelena Jovanovic; University of Belgrade, SR
Abelardo Pardo; University Carlos III of Madrid, ES
Wolfgang Reinhardt; University of Paderborn, DE
Katrien Verbert; K.U.Leuven, BE
PROGRAMME COMMITTEE:
Markus Specht, Open University of the Netherlands, The Netherlands
Peter Sloep, Open University of the Netherlands, The Netherlands
Marco Kalz, Open University of the Netherlands, The Netherlands
Christian Glahn, ETH Zuerich, Switzerland
Erik Duval, K.U. Leuven, Belgium
Martin Wolpers, FIT Fraunhofer, Germany
Nikos Manouselis, Agro-Know Technologies, Greece
Olga Santos, aDeNu Research Group, UNED, Spain
Dragan Gasevic, Athabasca University, Canada
Felix Mödritscher, Vienna University of Economics and Business, Austria
Fridolin Wild, Open University, United Kingdom
Gawesh Jawaheer, City University London, United Kingdom
Ebner Hannes, Royal Institute of Technology (KTH), Sweden
Hanan Ayad, Desire2Learn, Canada
Melody Siadaty, Athabasca University, Canada
Philippe Cudré-Mauroux, University of Fribourg, Switzerland
Carsten Keßler, University of Münster, Germany
Davide Taibi, Institute for Educational Technologies, Italian National Research Council, Italy
Tom Heath, Talis, UK
CfP: dataTEL Special Issue in the International Journal of Technology Enhanced Learning (IJTEL), deadline for submissions 25.10.2011
CALL FOR JOURNAL PAPERS
Special Issue on dataTEL
“Datasets and Data Supported Learning in Technology-Enhanced Learning”
International Journal of Technology Enhanced Learning (IJTEL)
ISSN (Online): 1753-5263 - ISSN (Print): 1753-5255
Deadline of submissions: 25 October 2011
The prospect of great growth of open and linked data in the knowledge society creates opportunities for new insights through advanced analysis methods based on, e.g., information extraction, filtering, and retrieval technologies. Educational institutions also create and own large datasets on their students and course activities. The analytic use of such data, however, is very limited when it comes to new educational services, recommending suitable peers, content, processes or goals, and improving the personalization of learning. Nevertheless, personalized learning is expected to have the potential to create more effective learning experiences and accelerate learners’ time-to-competence. In the educational world, the literature is sparse on how to build upon today’s very limited public datasets and how to accommodate the lack of agreed quality standards for the personalization of learning.
The special issue on dataTEL in IJTEL aims to address this gap by collecting high-value research papers that develop a body of knowledge about data-based personalization of learning. So far, there is no consensus on which algorithms can be successfully applied to make reliable analyses of data in a specific learning setting. An initial collection of datasets, coupled with case studies of their use in TEL, could be a first major step towards a theory of personalisation in TEL grounded in empirical experiments with verifiable and valid results.
However, data-driven research confronts researchers with a new set of challenges: a lack of common dataset formats and policies for sharing educational datasets, a huge variety of evaluation methods for comparing diverse personalization techniques, and new ethical and privacy issues that arise from the ability to link and mine information.
Therefore, the objective of this special issue is to explore suitable datasets for TEL, with a specific focus on recommender and information filtering systems that can take advantage of them. In this context, new challenges emerge, such as unclear legal protection rights and privacy issues, suitable policies and formats for sharing data, the pre-processing procedures and rules required to create sharable datasets, common evaluation criteria for recommender systems in TEL, and what a dataset-driven future in TEL could look like.
Relevant topics include, but are not limited to:
- descriptions of datasets that can be used for experimentation
- descriptions of data experiments (methods or results of experiments)
- experiences with those datasets
- dealing with legal protection rights towards datasets on a European level
- privacy preservation for educational datasets
- methods of effective anonymisation of educational datasets
- management and pre-processing procedures for educational datasets
- future scenarios for educational datasets
- impact of educational datasets for learners, teachers, and parents
- mash-ups based on educational datasets
- recommender approaches that are based on educational data
- evaluation methodologies and metrics for educational recommender systems
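On the last topic, two metrics commonly used when evaluating recommender systems can be sketched briefly; the data below is purely illustrative, not taken from any of the submitted datasets:

```python
import math

# Hypothetical predicted vs. actual ratings from a held-out test split.
actual    = [4, 3, 5, 2, 4]
predicted = [3.8, 2.5, 4.6, 2.9, 4.2]

def rmse(actual, predicted):
    """Root-mean-square error: a standard accuracy metric for rating prediction."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are actually relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

print(round(rmse(actual, predicted), 3))
print(precision_at_k(["r3", "r1", "r7"], {"r1", "r3"}, k=3))
```

Agreeing on a shared set of such metrics is exactly what would make results on different educational datasets comparable.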
SPECIAL ISSUE CO-EDITORS
Hendrik Drachsler, Open University, The Netherlands
Katrien Verbert, K.U. Leuven, Belgium
Miguel-Angel Sicilia, University of Alcalá, Spain
Nikos Manouselis, Agro-Know Technologies, Greece
Stefanie Lindstaedt, KnowCenter, Austria
Martin Wolpers, Fraunhofer Institute for Applied Information Technology, Germany
Riina Vuorikari, European Schoolnet, Belgium
Authors are invited to submit original unpublished research as papers. All submitted papers will be peer-reviewed by at least two members of the program committee for originality, significance, clarity, and quality. In addition, the authors are asked to contribute short abstracts of their submissions to the dataTEL group space at TELeurope.
Submissions will be handled through the EasyChair submission system:
Details of the journal and manuscript preparation are available here:
Any questions and submissions should be sent to:
REVIEW COMMITTEE (to be confirmed)
Erik Duval, K.U. Leuven, Belgium
Seda Gurses, K.U. Leuven, Belgium
Abelardo Pardo, University Carlos III of Madrid, Spain
Julià Minguillón, Open University of Catalonia, Spain
Olga Santos, aDeNu, Spanish National University for Distance Education, Spain
Julien Broisin, Université Paul Sabatier, France
Christoph Rensing, TU Darmstadt, Germany
Shlomo Berkovsky, CSIRO, Australia
John Stamper, Datashop, Pittsburgh Science of Learning Center, USA
Eelco Herder, Forschungszentrum L3S, Germany
Martin Memmel, DFKI, Germany
Xavier Ochoa, Escuela Superior Politécnica del Litoral, Ecuador
Fridolin Wild, KMI, Open University, UK
Wolfgang Reinhardt, University of Paderborn, Germany
Wolfgang Greller, Open Universiteit, The Netherlands
Marco Kalz, Open Universiteit, The Netherlands
Adriana Berlanga, Open Universiteit, The Netherlands
Peter Sloep, Open Universiteit, The Netherlands
Ralf Klamma, RWTH Aachen, Germany
Pythagoras Karampiperis, NCSR Demokritos, Greece
Giannis Stoitsis, IEEE, Greece
Submission of manuscripts: 25 October 2011
Completion of first review: 30 November 2011
Submission of revised manuscripts: 15 January 2012
Final decision notification: 10 February 2012
Publication date (tentative): February 2012
The manuscripts should be original, unpublished, and not in consideration for publication elsewhere at the time of submission to the International Journal on Technology-Enhanced Learning and during the review process.
Please carefully follow the author guidelines at http://www.inderscience.com/mapper.php?id=31 while preparing your manuscript. To get familiar with the style of the journal, please see a previous issue at http://www.inderscience.com/browse/index.php?journalID=246
All manuscripts will be subject to the usual high standards of peer review. Each paper will undergo double-blind review.
HICSS, the Hawaii International Conference on System Sciences, takes place each January in Hawaii. This coming year, the conference runs January 4-7 at the Grand Wailea hotel on the island of Maui.
This is a call for papers for a new minitrack on Learning Analytics and Networked Learning. Papers are due **June 15, 2011** submitted through the conference system. Please feel free to contact me or either of my co-organizers for feedback on suitability for the minitrack.
Other minitracks will be of interest to members of this list, including 'Social Networking and Communities', co-chaired by Karine Nahon and Caroline Haythornthwaite (see: http://haythorn.wordpress.com/hicss-minitracks-cfp/).
CALL FOR PAPERS
LEARNING ANALYTICS & NETWORKED LEARNING
This minitrack calls for papers that address leading-edge use of technology or system design to analyze, support, and/or create learning and learning environments. The remit is wide: we invite papers that use technology to examine how social learning happens, use data from learning environments to support learning processes, and examine new practices of formal and informal learning on and through the Internet. Papers that fit this minitrack fall under new and ongoing areas of learning research that may be referred to as learning analytics, networked learning, technology enhanced learning, computer-supported collaborative learning, ubiquitous learning, and mobile learning. Of particular interest are papers that capture, analyze and show novel use of data produced from online learning environments, develop and/or test methodologies for analyzing online learning, address automated data collection and analysis in support of learning, professional development and knowledge creation, and discuss issues and opportunities relating to information literacy, literacy and new media, ubiquitous learning, entrepreneurial learning and/or mobile learning.
We envision papers that
• address the use of automated data capture to follow and analyze learning processes
• develop methodologies for analyzing online learning
• develop metrics for characterizing and following learning trends online
• test the validity of automated data for capturing a true representation of learning and knowledge creation
• analyze and/or support the role of social networks in learning
• report on the development and maintenance of innovative online environments for learning
• discuss trends in learning on and through the Internet, including issues and opportunities relating to information literacy, literacy and new media, ubiquitous learning and entrepreneurial learning
• examine economic models, trends and markets for online learning, including open source and open access models
• examine the foundations for learning in online networks, crowds and communities
• examine the design and facilitation of learning in online networks, crowds and communities
• examine the validity of information and learning processes online, and trust in online information sources for learning
• address the role of particular devices: laptops, mobiles, OLPC in learning
• examine trends in how we learn with and through technology in secondary and higher education, workplaces, society, developed and underdeveloped nations
• discuss ethical issues relating to learning online, including issues relating to data capture, analysis and display, and learning about controversial subjects or anti-social activities.
SUBMIT INQUIRIES TO:
Caroline Haythornthwaite (Primary Contact)
University of British Columbia
Maarten de Laat
Open University of the Netherlands
The RecSysTEL workshop, sponsored by dataTEL and the Organic.Edunet project, was a really exciting event. It was a big step forward for RecSys research in TEL, especially in: 1. extending the research community on RecSysTEL, and 2. changing the way RecSysTEL research will be conducted in the future.
Regarding 1, we took advantage of the lucky circumstance that the ECTEL and ACM RecSys conferences took place in the same week in Barcelona: a great opportunity to bring both research communities together in one workshop. In the end we created a kind of mini-conference of our own, with a core group of people who attended both workshop days and a wider audience from both communities who attended one particular day. People traveled between the ECTEL and the ACM RecSys locations, so we did not only link the people virtually. Furthermore, we had keynotes from Joseph Konstan of GroupLens Research and Kris Jack from the Mendeley startup. Joseph's keynote talk was highly appreciated by the ECTEL community and really had an impact on the ongoing research in TEL recommenders.
Kris presented the Mendeley reference system and the datasets they released in cooperation with our dataTEL dataset challenge. We recorded both talks and will broadcast them soon.
Regarding 2, one special focus of the RecSysTEL workshop was on datasets that can be used for RecSys research. To collect relevant TEL-related datasets, the first dataTEL challenge was launched as part of the RecSysTEL workshop. In this sub-call of the workshop, research groups were invited to submit existing datasets from TEL applications that can be used for research on recommender systems for TEL. We opened the first 'dataTEL system marketplace' on the first day of the workshop.
You can find an overview of the presented datasets in a posting of Guenter Beham in the dataTEL group space at TELeurope.eu.
The most pressing topics in this session were the need for a standardized metadata structure to exchange datasets, how to deal with privacy and legal protection rights, anonymization of datasets, pre-processing of datasets, and shared evaluation metrics to compare the effects of TEL recommender systems. This very interactive session was a big step for the community and kept people in a very crowded, windowless room until 6:30 pm, while the sun was shining outside in beautiful Barcelona!
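To make the anonymization issue concrete, here is a minimal sketch of one common approach: replacing learner identifiers with a keyed hash (pseudonymization) before a dataset is shared. All field names and values are hypothetical, and real releases would also need to address re-identification via quasi-identifiers:

```python
import hashlib
import hmac

# Secret key kept by the data owner; without it, pseudonyms cannot be
# reversed or linked across independently released datasets.
SECRET_KEY = b"replace-with-institution-secret"

def pseudonymize(user_id: str) -> str:
    """Replace a learner ID with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Hypothetical interaction log from a TEL application.
log = [
    {"user": "student42", "resource": "doc-17", "rating": 4},
    {"user": "student42", "resource": "doc-99", "rating": 2},
]

anonymized = [{**row, "user": pseudonymize(row["user"])} for row in log]

# The same learner still maps to the same pseudonym, so collaborative
# filtering on the shared data remains possible while the raw identity
# is removed from the released records.
assert anonymized[0]["user"] == anonymized[1]["user"]
```

Keeping pseudonyms stable within a dataset but unlinkable across datasets is one of the design trade-offs the session kept circling back to.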
It became clear that the ultimate goal of RecSysTEL research is a research infrastructure where researchers can find well-documented and version-controlled datasets from different research institutes, ranging from formal to informal learning applications. Every RecSysTEL study should reference a publicly available dataset to make its results repeatable and comparable, and to describe its contribution to the improvement of learning.
The current research practice is mainly based on small-scale experiments in which a few learners are asked to rate the relevance of suggested resources in a controlled setting. While such experiments offer valuable insights into the usefulness and relevance of recommender systems for learning, stronger conclusions about the validity and generalizability of recommender experiments are needed in order to create a theory of personalization in TEL.
A theory of personalization in TEL needs more verifiable and repeatable experiments that allow the comparison of results based on datasets that capture learner interactions. A dataset collection and infrastructure could help researchers create repeatable experiments and gain valid, comprehensive knowledge about how certain recommender algorithms perform on certain datasets in a particular learning setting.
The impact of the workshop on the research community is already visible in the increasing number of comparison studies currently conducted by different research units. These studies are still quite basic, as they apply traditional collaborative filtering algorithms to different educational datasets and report the results. But this is the right path to follow to gain valid knowledge about the impact of recommenders and the personalization of learning.
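The kind of "traditional collaborative filtering" applied in these comparison studies can be sketched in a few lines. This is an illustrative user-based variant with cosine similarity; the learners, resources, and ratings are invented, and published studies would run this over the full shared datasets with proper train/test splits:

```python
import math

# Tiny hypothetical user-item rating matrix (learners x resources).
ratings = {
    "alice": {"r1": 5, "r2": 3, "r3": 4},
    "bob":   {"r1": 3, "r2": 1, "r3": 2, "r4": 3},
    "carol": {"r1": 4, "r2": 3, "r3": 4, "r4": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users rated in common."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    pairs = [(cosine(ratings[user], r), r[item])
             for name, r in ratings.items() if name != user and item in r]
    sim_sum = sum(s for s, _ in pairs)
    return sum(s * v for s, v in pairs) / sim_sum if sim_sum else 0.0

print(round(predict("alice", "r4"), 2))  # alice has not rated r4
```

Running the same algorithm against several educational datasets, and reporting the prediction error on each, is essentially what the current comparison studies do.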
Based on the big success of the workshop, we organized a follow-up workshop at the upcoming ARV2011 in March. Again we will have a two-day workshop, again we will have exciting keynote speakers, and again we will have very exciting contributions: good signs for another outstanding event.
Pressing topics are:
- publicly available data sets for educational systems
- dealing with legal protection rights towards data sets on a European level
- privacy preservation for educational data sets
- methods of effective anonymization of educational data sets
- management and pre-processing procedures for educational data sets
- future scenarios for educational data sets
- impact of educational data sets for learners and teachers
- mash-ups based on educational data sets
- recommender approaches that are based on educational data
- evaluation methodologies and metrics for educational recommender systems
Besides these topics, we are planning a first dataTEL competition, where different research units will have the opportunity to compete with their algorithms on specific educational datasets. For this, we are in contact with Shlomo Berkovsky (CSIRO ICT Centre, Hobart), who organizes the CAMRa competition on context-aware recommender systems. With the dataset competition we also want to attract people from other research communities such as ACM RecSys, EDM (Educational Data Mining), and other information retrieval communities to work on educational datasets and increase the knowledge base on personalization technologies in TEL.