Archive for April, 2012
Below you can find the presentation of the paper that Wolfgang Greller and I gave on our international survey on Confidence in Learning Analytics at the LAK12 conference, Vancouver, Canada. Many stakeholders rated the framework study as very helpful for describing the current needs of the young Learning Analytics field, and other researchers, such as SURF or the OUUK, have already referenced it. It was rated as the most helpful model for introducing learning analytics and its core research challenges to related stakeholders.
The article reported the results of an exploratory community survey in learning analytics that aimed at extracting the perceptions, expectations, and levels of understanding of stakeholders in the domain. The survey was divided into six dimensions, from which we drew a number of conclusions that we present below.
- Stakeholders: Participants identified the main beneficiaries in learning analytics as learners and teachers followed by organisations. Furthermore, the majority of respondents agreed that the biggest benefits would be gained in the teacher-to-student relationship and that learners would almost certainly require teacher help to learn from an analysis and for taking the right course of action. This is rather surprising as learning analytics is seen by many researchers as an innovative liberating force that would be able to change traditional learning by reflection and peer support, thus strengthening independent and lifelong learning. This latter opinion on independence could be seen in the ‘objective’ section of the survey (cf. chapter 3.2 above) where the majority expressed a preference for learning analytics to pay special attention to non-formalised and innovative ways of teaching and learning. Yet, respondents expect less potential impact on the student-to-student and the teacher-to-teacher relationships. This current perspective may be affected by the scarcity of learning analytics applications that demonstrate the innovative possibilities for learning and teaching. Thus people may not have a clear point of reference as, for example, is the case for ‘social networks’ where an established group of competitive platforms already exists.
- Objectives: The survey concludes further that research on learning analytics should focus on reflection support. The attained results clearly emphasized the importance of ‘stimulating reflection in the stakeholders about their own performance’. This goal could be supported by revealing hitherto hidden information about learners, which was the second most important objective. At the same time more timely information, institutional insights, and insights into the learning context were other areas of interest to the constituency.
- Data: Our institutional inventory in chapter 3.3 gives an overview of the most widespread IT systems. These could be prioritised by learning analytics technologies to gain an institutional foothold, and they also provide the best ground for inter-institutional data sharing. Anonymisation can perhaps be seen as the most important enabler for such sharing to happen: it is emphasised in a number of responses as the second most important data attribute and confirmed by the willingness of people to share if data is anonymised. For a clear majority, anonymisation also reduces fears of privacy breaches through sharing (cf. chapter 3.5). On the other hand, when it comes to internal sharing with departments and operational units of the same institution, the use of available data will continue to be an uphill struggle and, according to participants, will require good justification. Here, perhaps, a clearer mandate for ethical boards, which are already widely in place, may help.
- Methods: Chapter 3.4 on methods revealed that trust in learning analytics algorithms is not well developed. We interpret the mid-range return levels as hesitation towards “calculating” education and learning. What seems interesting to us is that the widely interpretable hope of gaining a comprehensive view on learning progress was given the highest confidence, though perhaps this reflects wishful thinking rather than a real expectation. Expectations of impact on assessment were rather low overall: a majority of people did not see easier or more objective assessments coming out of learning analytics (cf. chapter 3.2), and they were also not fully convinced that it would provide a good assessment of a learner’s state of knowledge (cf. chapter 3.4).
- Constraints: A large proportion of respondents thought learning analytics may lead to breaches of privacy and intrusion. Yet, they ranked privacy and ethical aspects as of lesser importance to consider (cf. chapter 3.5) or as belonging to further competence development (cf. chapter 3.6). However, data ownership was expressed as highly important. This may be interpreted to mean that if ownership of data lies with the learners themselves, there is no perceived risk of privacy or ethical abuse. In any case, it seems that many organisations have ethical boards and guidelines in place. These may come to play an increasingly important role in institutional data exploitation, since a large number of respondents trust that anonymisation of educational data is possible but not necessarily sufficient to enable its full internal exploitation within an organisation.
- Competences: In the area of competences, participants mainly stressed the importance of self-directedness, critical reflection, analytic skills, and evaluation skills. On the other hand, few believe that students already possess these skills. This indicates to us a need to support students in developing these learning analytics competences. In conclusion, the results suggest that there is little faith that learning analytics will lead to more independence of learners in controlling and managing their learning process. This identifies a clear need to guide students towards more self-directedness and critical reflection if learning analytics is to be applied more broadly in education. This interpretation stands in contrast to some suggestions made with respect to empowering learners by providing graphical reflections of the learning process and further access to additional information regarding their learning progress.
The dataset used for this article and a pre-print of the study are available in the dspace.ou.nl repository (at http://dspace.ou.nl/handle/1820/3850). In this way, we would like to encourage the learning analytics community to gain additional insights from our dataset and so contribute to the fast-evolving learning analytics research field.
OUNL really is a cornerstone of the LAK12 conference, with two workshops and two full papers. Together with some international colleagues (George Siemens, Dragan Gasevic, Stefan Dietze, Wolfgang Reinhardt, and Abelardo Pardo) we organised a full-day workshop on ‘Linked Data and Learning Analytics – #LALD’ at LAK12.
LALD is a very visionary workshop that assumes linked datasets will become increasingly important for data-driven research. We envision that in the near future research will take advantage of configuration files that create linked datasets for data-driven research. At the moment, Learning Analytics and data research lack publicly available datasets against which to test and compare their findings. The main objective of the 1st International Workshop on Learning Analytics and Linked Data (#LALD2012) is to connect the research efforts on Linked Data and Learning Analytics in order to create visionary ideas and foster synergies between the two young research fields.
Below you can find the slides we used during the workshop.
Dragan’s slides on semantic web are here: [here]
Representation of the data is critical to sense making: [here]
Learning analytics and guidelines for ethical use: [here]
The connectivist pedagogy, a concept that addressed the potential of Learning Analytics and linked data:
Anderson and Dron 2011 [here]
Hi folks, here you can find the introduction slides for the LALD workshop on 29th April 2012 at LAK12, Vancouver, Canada. We are looking forward to it, as we had very good experiences with the PMI rating and the Grand Challenge task in previous dataTEL workshops.
Here you can find the agenda for the 1st International Workshop on Learning Analytics and Linked Data (#LALD) at the 2nd International Conference on Learning Analytics and Knowledge (LAK12), Vancouver, Canada.
The workshop is co-organised by the EATEL SIG dataTEL and the LinkedEducation.org research initiatives. It is motivated by the multitude of datasets in TEL that offer new opportunities for teaching and learning. The available datasets can be roughly divided into (a) Open Web Data and (b) Personal Learning Data originating from different learning environments.
Open Web Data covers educational data publicly available on the Web, such as Linked Open Data (LOD) published by institutions about their courses and other resources; examples include (but are not limited to) The Open University (UK), the National Research Council (CNR, Italy), Southampton University (UK), and the mEducator Linked Educational Resources. It also includes the emerging LD-based metadata schemas and TEL-related datasets. The main driver for adopting the LOD approach in the educational domain is the enrichment of learning content and the learning experience by making use of various connected data sources.
Personal Learning Data from different learning environments originates from tracking learners’ interactions with different tools and resources. The main driver for analysing these data is the vision of personalised learning, which offers the potential to create more effective learning experiences through new possibilities for predicting and reflecting on learning processes.
The main objective of the LALD workshop is to connect the research efforts on Linked Data and Learning Analytics to create visionary ideas about how the synergy of the Web of Data and Learning Analytics can transform and support TEL processes and applications. The workshop will therefore explore, collect, and review datasets for TEL in order to discuss Learning Analytics approaches that make use of the Web of Data. During the workshop, an overview of available educational datasets will be given. Participants will have the opportunity to present their own datasets or dataset descriptions, show their own data products and tools, and work out Grand Challenges that need to be overcome to collect, use, and share educational datasets and their products.
In the third delivery of our survey series on Learning Analytics we focus on the results of the survey around the subdomains “Educational Data” and “Applied methods and Technologies” of the Learning Analytics framework.
The section on data investigated the parameters for sharing datasets within and across institutions. The potential of shareable educational datasets as benchmarking tools for technology-enhanced learning is explicitly addressed by the Special Interest Group (SIG) dataTEL of the European Association of Technology Enhanced Learning (EATEL). Sharing of learning analytics data is impeded by the lack of standard features and attributes that allow the re-use and re-interpretation of data and their applied algorithms. For researchers, the most important feature was the availability of added context information (n=43, mean 3.42) on a Likert scale with a maximum value of 4. Perhaps equally unsurprisingly, for the manager group sharing within the institution (n=16, mean 3.63) and anonymisation (n=19, mean 3.53) were the most important attributes. Teachers, on the other hand, valued context (n=52, mean 3.42) and meta-information (n=47, mean 3.47) the most. At the other end of the spectrum, version control was the least important attribute across all constituencies (n=106, mean 2.93). However, although ‘version control of educational datasets’ was ranked the lowest, we still believe that it will play an important role in an educational data future: version-controlled datasets will offer additional insights into reflection and improvement through learning analytics by comparing older and newer datasets. Graph 6 illustrates the importance of the given data attributes. Note that the rating “important” outweighs “highly important” overall, which results in a lower mean value.
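As an aside, per-group Likert means like the ones reported above are straightforward to reproduce. The minimal sketch below uses invented response data (the real dataset lives in the dspace.ou.nl repository); it groups 1–4 ratings by stakeholder group and attribute and prints n and the mean for each combination:

```python
# Minimal sketch (hypothetical data): computing mean importance ratings of
# dataset attributes per stakeholder group on a 1-4 Likert scale.
# All response values below are invented for illustration.
from collections import defaultdict
from statistics import mean

# (group, attribute, rating) tuples, e.g. from a questionnaire export
responses = [
    ("researcher", "context", 4), ("researcher", "context", 3),
    ("manager", "anonymisation", 4), ("manager", "anonymisation", 3),
    ("teacher", "meta-information", 3), ("teacher", "meta-information", 4),
]

# collect all ratings for each (group, attribute) pair
ratings = defaultdict(list)
for group, attribute, rating in responses:
    ratings[(group, attribute)].append(rating)

# report n and mean per pair, as in the survey tables
for (group, attribute), values in sorted(ratings.items()):
    print(f"{group:<11} {attribute:<17} n={len(values)} mean={mean(values):.2f}")
```

With real survey exports, the same grouping would directly yield the per-constituency tables behind Graph 6.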
To get an idea of existing educational data, we asked participants about their institutional IT systems. For learning analytics, the landscape of data systems will play an important part in information sharing and comparison between institutions.
In the tertiary education sector alone (Further and Higher Education), 93.9% (n=92) reported an institutional learning management system, which made this the most popular data platform by far. This was followed by a student information system 62.2% (n=61) and the use of third-party services such as Google Docs or Facebook 53.1% (n=52). Table 2 below shows a summary inventory of institutional systems in use across all sectors of education covered in our demographics.
We assume that the more widely available a type of system is, the more potential it would hold for inter-institutional sharing of data, which could be utilised for comparison of educational practices or success factors. However, such sharing would depend on the willingness of institutions to share educational datasets with each other. When asked this question, a majority of people (86.6%, n=71) were happy to share data when anonymised according to standard principles.
What is slightly contradictory is that people who had indicated earlier that anonymisation was not an important data attribute were less inclined to share (n=18, 83.3% yes : 16.7% no) than people who felt that it was highly important (n=40, 92.5% yes : 7.5% no).
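To make the anonymisation point concrete, the sketch below shows one common pseudonymisation step prior to sharing: replacing learner identifiers with salted one-way hashes. The field names and salt are hypothetical, and real anonymisation would also have to address quasi-identifiers (e.g. course plus timestamp combinations that could re-identify a learner):

```python
# Minimal sketch of one possible anonymisation step before sharing a dataset:
# replace learner identifiers with salted one-way hashes. Field names and
# salt are hypothetical examples, not part of any real system.
import hashlib

SALT = b"institution-secret-salt"  # kept private by the sharing institution


def pseudonymise(learner_id: str) -> str:
    """Return a stable pseudonym that cannot be reversed without the salt."""
    return hashlib.sha256(SALT + learner_id.encode("utf-8")).hexdigest()[:12]


record = {"learner_id": "s1234567", "course": "EDU-101", "score": 0.82}
shared = {**record, "learner_id": pseudonymise(record["learner_id"])}
print(shared)  # same structure, but the learner is no longer identifiable
```

Because the hash is stable, records belonging to the same learner can still be linked across shared datasets without revealing who that learner is.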
Methods and Technologies
Learning analytics is based on algorithms (formulas), methods, and theories that translate data into meaningful information. Because these methods involve bias, the questionnaire investigated the trust people put into a quantitative analysis producing accurate and appropriate results. Within the 100% rating range, where 100% indicates total confidence and 0% no confidence at all, the responses were located at mid-range. Among the given choices, slightly higher trust was placed in the prediction of relevant learning resources. This may be due to an analogy with the amazon.com recommendation model, which is well known and widely trusted. Other recommendations, such as predictions of peers or performance, were rated rather low. The percentage on the horizontal axis in graph 7 below shows the level of confidence.
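The amazon.com-style, item-based recommendation idea mentioned above can be illustrated with a small sketch. All resource names and usage data below are invented; the point is only to show how similarity between resource usage vectors could drive a learning-resource recommendation:

```python
# Minimal sketch of item-based resource recommendation: resources whose
# usage vectors are similar to what a learner already used are recommended.
# All names and usage data are invented for illustration.
from math import sqrt

# rows = resources, columns = learners; 1 = the learner accessed the resource
usage = {
    "intro_video":  [1, 1, 0, 1],
    "quiz_1":       [1, 1, 0, 0],
    "wiki_article": [0, 0, 1, 1],
}


def cosine(a, b):
    """Cosine similarity between two usage vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def similar_to(resource):
    """Rank all other resources by similarity to the given one."""
    return sorted(
        ((other, cosine(usage[resource], vec))
         for other, vec in usage.items() if other != resource),
        key=lambda pair: pair[1], reverse=True,
    )


print(similar_to("intro_video"))  # quiz_1 ranks above wiki_article
```

Even this toy version makes the trust question visible: the ranking depends entirely on which interactions are logged and how similarity is defined, which is exactly the kind of built-in bias the survey respondents seemed to sense.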
One comment criticised that it was “disappointing that you included institutional markers, rather than personal ones for the learners, e.g. while learning outside the institution, which in my view are much more important and interesting”. We are not aware that the questions actually reflected an institution-centric perspective. At the same time, we remain sceptical that analytics is currently able to seamlessly capture learning in a distributed open environment, but mash-up personal learning environments are on the rise and may soon provide suitable opportunities for personal learning analytics.
In our next blog post we will focus on the subdomains “Constraints (Privacy and Ethics)” and the new “Competences” that are needed for Learning Analytics.
Mendeley is recruiting a Marie Curie Senior Research Fellow. Your primary responsibility will be to ensure that Mendeley’s research catalogue (i.e. collection of articles) is of high quality. Mendeley has crowdsourced the world’s largest research catalogue with over 50 million unique articles contributed by almost two million users over a period of four years. With your expert knowledge in data technologies and algorithms, you will take ownership of this catalogue, and work on innovative techniques for improving its quality. Your work should result in a cleaner, better structured and more scalable catalogue.
This position is part of the TEAM project (http://team-project.tugraz.at). You will spend 1 year in Mendeley’s London office before spending 1 year at TU Graz, the Knowledge Management Institute (http://kmi.tugraz.at/), Austria, collaborating with a top-class team. You will be passionate about working with large scale data collections and take pride in producing high quality data.
Ensure that the research catalogue is of high quality
Understand, maintain and help develop current crowdsourcing system
Disseminate results from your work both internally and externally
What you’ll be doing
Crowdsourcing a homogeneous catalogue from heterogeneous data sources, using modern data techniques
Identifying data sources, judging their appropriateness and working with data engineers to import them into the catalogue
Working with Data Engineers and Platform Team to make reliable/scalable systems
Working with Data Architect to ensure coherent data mapping, ontologies and schemas
Working with Mendeley’s Chief Scientist in contributing to solving data problems outside of the scope of catalogue crowdsourcing
Working 1 year from Mendeley’s London office, followed by 1 year in TU Graz before returning to London, with regular travel between both locations
What you should bring
PhD in the field of Computer Science or 4-10 years of full-time research (following first publication)
Expert knowledge of text and document processing, with strong machine learning background
Experience working with large-scale catalogues
Database integration experience
2+ years of Java programming; can independently prototype solutions to problems
Experience with big data technologies (e.g. Hadoop, MapReduce, NoSQL)
Unix skills, preferably Linux
Fluent spoken and written English
Strong presentation skills in communicating with experts and novices
What we offer
Salary of £50k per annum + stock options
No out-of-hours support expected
25 days holidays
Company benefits such as: cycle to work scheme, childcare vouchers, BUPA (private healthcare), Friday beer o’clocks (snacks and drinks on the house), free breakfast, monthly team night’s out, annual events (Christmas party and summer barbecue)
Working in a great environment in a central London office with roof terrace
Nationality: The researcher may be a national of a Member State of the Union, of an Associated Country, or of any other third country.
Mobility: At the time of selection, the researcher must not have resided or carried out his/her main activity in the country of the beneficiary home organisation for more than 12 months in the 3 years immediately prior to his/her selection under the project.
The appointed researcher must not have spent more than 12 months in the 3 years immediately prior to the selection by the home organisation in the same appointing organisation.
If you are interested, send your CV and cover letter to jobs [at] mendeley [dot] com. If you are selected for an interview, we will let you know within two weeks.
You are currently browsing the geistlogistic blog archives for April, 2012.