LearnSphere's DataShop provides two main services to the learning science community.
  1. a secure, central repository for storing research data
  2. a set of analysis and reporting tools

Researchers can rapidly access standard reports such as learning curves, as well as browse data using the interactive web application. To support other analyses, DataShop can export data to a tab-delimited format compatible with statistical software and other analysis packages.

Launch DataShop
Case Studies
Watch a video on how DataShop was used to discover a better knowledge component model of student learning. Read more ...
Systems with data in DataShop
Browse a list of applications and projects that have stored data in DataShop, and try out some of the tutors and games. Read more ...
DataShop News
Sunday, June 2, 2024

DataShop 11.0 released!

A facelift for DataShop!

Even though we haven't kept up with the release announcements, there have been several releases over the years. This one, though, is special because it brings a much-needed facelift to DataShop. Same great content and tools but a shiny new look. Let us know what you think!

Posted by the DataShop Team at 3:00 PM
Saturday February 27, 2021

DataShop 10.7 released!

  1. We are excited to share that we have added a Dataset Search feature to DataShop.
  2. Users can now search for data based on a variety of dataset properties, associations, and metrics. The site-wide search is incorporated into the new search page for searching help pages, files, and papers. Below the site-wide search, the new Dataset Search utilizes over 20 attributes to filter datasets, such as the Shareable status, Area and Subject of study, Tutor types, as well as other qualitative data. Dataset Search also supports filtering by quantitative metrics such as the number of Transactions, Students, Steps, Knowledge Components, and others.

    Screenshot showing output for the new Dataset Search feature

    For example, in the figure above we see that there are 4 datasets that are Shareable, have a subject area of Algebra, and use the CTAT tutor. Removing the Tutor filter would reveal 169 datasets that match the subject and shareability filters. The results include a link to each dataset as well as its metadata.

    The feature is available in the Explore section in the left-hand navigation or via the search link in the upper right-hand corner. Pages reached from the filtered search results include a "Back to Search" link that preserves your filter criteria.

  3. To reflect the wider scope of data sources in DataShop, the terms "Domain" and "LearnLab" have been renamed "Area" and "Subject", respectively.
  4. We have added support for a Computer Science Area with the following list of Subjects:
    • Introductory Programming
    • Introductory Programming: Java
    • Introductory Programming: Python
    • Machine Learning and Data Science
    • Data Structures and Algorithms
    • Databases
    Please contact us if you have Computer Science data that you believe belongs in a different Subject; we can add others.
  5. Members of the LearnSphere community have added new workflow components.
  6. Source code for all Tigris components can be found in our GitHub repository. The first three components were developed as part of research efforts for the Personalized Learning² (PL2) project. Two of them analyze student MAP (Measures of Academic Progress) data, specifically RIT scores.

    • Student Growth
    • This analysis component computes student MAP RIT score trends, over two years, for minoritized and non-minoritized students. RIT scores are an estimate of a student's instructional level. Schools use MAP tests to measure student progress, or growth, from year to year.

      Screenshot showing output for the Student Growth component
    • Learning Rate
    • This component uses AFM (Additive Factors Modeling) to model student growth (or Learning Rate) from two years of MAP test data, comparing students who received interventions with those who did not.

      Screenshot showing output for the Learning Rate component
    • Student Demographics
    • This visualization component generates four student demographic plots, displaying the distribution of students by race, gender, school and grade.

      Screenshots showing output for the Student Demographics component
    • Multiskill Converter
    • This transform component can be used to convert a multi-skill Knowledge Component (KC) model in a DataShop student-step export. The component supports either creating a new skill by concatenating two or more skills or creating multiple distinct rows (and opportunity counts) from a single multi-skill row. This allows researchers to consider alternative ways in which skills present in a dataset can be used to better model student behavior.
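The "split" behavior described above can be sketched as follows. This is a simplified illustration, not the component's actual implementation: the "~~" skill delimiter and the column names are assumptions.

```python
# Sketch of the Multiskill Converter's "split" mode: each multi-skill
# student-step row becomes one row per skill, with per-skill opportunity
# counts recomputed. Delimiter and column names are assumed, not documented.
from collections import defaultdict

def split_multiskill(rows, skill_col="KC (Default)", delim="~~"):
    """Expand multi-skill rows and recompute per-skill opportunity counts."""
    opportunities = defaultdict(int)  # (student, skill) -> opportunities so far
    out = []
    for row in rows:
        for skill in row[skill_col].split(delim):
            new_row = dict(row)
            new_row[skill_col] = skill
            key = (row["Anon Student Id"], skill)
            opportunities[key] += 1
            new_row["Opportunity (Default)"] = opportunities[key]
            out.append(new_row)
    return out

steps = [
    {"Anon Student Id": "s1", "KC (Default)": "add~~carry"},
    {"Anon Student Id": "s1", "KC (Default)": "add"},
]
expanded = split_multiskill(steps)
# the multi-skill row expands to two rows, and "add" reaches opportunity 2
```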

Posted by Cindy at 4:00 PM

Monday, December 9, 2019

LearnSphere 2.4 released!

The focus of this release is largely back-end improvements to Tigris, the online workflow authoring tool which is part of LearnSphere. We have also added several new Tigris workflow components, many in response to feedback we have received at workshops and conferences.

  1. Support for arrays in component options.
  2. Tigris now supports array types for simple data types (double, integer, string), FileInputHeaders, and enumerated types (drop-down lists). Users can define a default for each array element, as well as the minimum and maximum number of allowed values. Array types are especially useful when a component takes an unknown number of arguments. For example, a variable 'weights' might be defined as an array of xs:double values with at least two required values.
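As a rough illustration of the 'weights' example, an array option in a component definition might look like the fragment below. The element and attribute names here are guesses for illustration only; the actual Tigris component definition language may differ.

```xml
<!-- Hypothetical sketch of an array-typed component option: element and
     attribute names are illustrative, not the real definition syntax. -->
<option name="weights" type="xs:double" isArray="true"
        minArraySize="2" default="1.0" />
```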

  3. New and updated workflow components.
  4. Source code for all Tigris components can be found in our GitHub repository.

    • RISE
    • This R component conducts RISE analyses as described in the paper The RISE Framework: Using Learning Analytics to Automatically Identify Open Educational Resources for Continuous Improvement.

    • OLI-to-RISE
    • For OLI datasets in DataShop, this component will generate an output file suitable for use in the RISE component. The output columns are skill name, average high-stakes error rate and average low-stakes error rate.

      Screenshot showing output for both 'OLI-to-RISE' and 'RISE' components
    • LFA Search
    • Learning Factors Analysis (LFA) performs a heuristic search to generate models of the input data. AFM (Additive Factors Model) is used to compute the metric (AIC or BIC) by which the models are compared for best fit.

    • Student Progress Classification
    • This component, developed as part of the PL2 project, classifies a student's progress based on their EdTech usage, accuracy information and specified goal.

    • Performance Difference Analysis
    • This component calculates performance differences for an individual course. The target use case is calculating gendered differences (or other factors) in final grade for a target course (e.g., intro physics), especially relative to grades in other courses taken by students in the target course. For reference: Koester, Grom, & McKay (2016) and Matz et al. (2017).

    • Curriculum Pacing
    • Curriculum Pacing is a way to visualize student learning trajectories through curriculum units and sections. This visualization is suitable to quickly explore end-to-end curriculum data and see patterns of student learning. More information can be found in this paper.

    • Multi-selection Converter
    • Given a DataShop transaction export, this component converts a multi-selection row into multiple rows of single steps and sets the Outcome values accordingly. Using a multi-selection item mapping file, an output transaction export is generated with each row labeled with the appropriate Event Type.

    • Tetrad Simulation
    • This component supports the Tetrad Simulation functionality that can be used to generate data from an instantiated model. This component allows Tigris to support the Tetrad functionality mentioned in this tutorial by Richard Scheines.

    • Wheel-spinning Detector
    • This component implements the algorithm given in Joe Beck's paper to detect if a student is wheel-spinning. The required input format is a DataShop student-step export.

    • Tetrad Graph Editor
    • The editor was extended to allow users to create graphs from scratch, generating named nodes and links in the options panel. The created graph is available for download.

    • Tetrad Regression
    • A third output was added that includes an R² value (a measure of how closely the data fit the regression line) and the sum of squared errors (SSE).

  5. DataShop now includes support for non-instructional steps.
  6. The dataset import procedure will now process a new column, Event Type, in the tab-delimited transaction data file, if present. The allowed values for this column are assess, instruct, and assess_instruct. The assess event type marks a non-instructional step; the other event types increment the learning opportunity count by 1. Using non-instructional steps, you can grant partial credit on multi-selection questions. For example, suppose a question's correct answer is A and B. A student who answers B and C should receive partial credit for selecting B.
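The opportunity-count behavior above can be sketched in a few lines. This is a simplified illustration of the described import semantics, not DataShop's actual import code; the column names are assumptions.

```python
# Sketch of the Event Type semantics: "assess" rows are non-instructional
# and do not count, while "instruct" and "assess_instruct" rows increment
# the learning-opportunity count for that (student, KC) pair.
INSTRUCTIONAL = {"instruct", "assess_instruct"}

def opportunity_counts(transactions):
    """Count learning opportunities per (student, KC), skipping assess-only rows."""
    counts = {}
    for tx in transactions:
        if tx.get("Event Type", "assess_instruct") in INSTRUCTIONAL:
            key = (tx["Anon Student Id"], tx["KC (Default)"])
            counts[key] = counts.get(key, 0) + 1
    return counts

txs = [
    {"Anon Student Id": "s1", "KC (Default)": "select-all", "Event Type": "assess"},
    {"Anon Student Id": "s1", "KC (Default)": "select-all", "Event Type": "assess_instruct"},
    {"Anon Student Id": "s1", "KC (Default)": "select-all", "Event Type": "instruct"},
]
counts = opportunity_counts(txs)
# only the two instructional rows count toward the opportunity total
```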

Posted by Cindy at 10:00 AM

Wednesday, 5 December 2019

Attention! DataShop downtime for release of v10.5

DataShop is going to be down for 2-4 hours beginning at 8:00am EST on Monday, December 9, 2019 while our servers are being updated with the new release.

Posted by Cindy at 3:40 PM

Friday, April 5, 2019

LearnSphere 2.3 released!

This release features several usability improvements to Tigris, the online workflow authoring tool which is part of LearnSphere. We appreciate the feedback we have received at workshops and conferences and have addressed many of your comments in this release.

  1. Component warning message.
  2. We've added support for displaying warning messages generated by components. Components that emit warnings still run to completion, but now indicate that something may be amiss. A new output stream, the "warning" stream, is recognized by the system. If authoring a new component, you can add warnings simply by logging messages with the prefix "WARN:" or "WARNING:" to the WorkflowComponent.log or *.wfl log files. See our online documentation for more details.
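For a component written in Python, emitting a warning could look like the sketch below. The log file name comes from the text above; the helper function itself is our own convenience, not part of any Tigris API.

```python
# Minimal sketch of raising a component warning in Tigris: lines prefixed
# with "WARN:" (or "WARNING:") written to WorkflowComponent.log are picked
# up by the system's warning stream. The helper is illustrative only.
def log_warning(message, log_path="WorkflowComponent.log"):
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(f"WARN: {message}\n")

log_warning("3 rows had missing KC labels and were skipped.")
```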

  3. Attach your papers to your workflows.
  4. By attaching papers to your workflows, you allow other Tigris users to easily access published work that references them. Papers may be provided as URLs or uploaded directly to Tigris. An indicator shows which workflows have papers attached. Papers can be attached via the "Link" button in the workflow editor. Once added, a paper may be linked to any number of workflows.

  5. Copy, paste and move multiple components.
  6. This feature makes it easy to create a workflow that runs over several different input files (or DataShop datasets). Copy and paste one or more components to extend a workflow, or select groups of components to rearrange them, as a group, in your workspace. You can select components by left-clicking and dragging to create a selection rectangle, or by clicking while holding the ctrl or ⌘ (command) key. Data and options are retained during a copy/paste operation.

  7. Links included in annotations and descriptions are now clickable.
  8. Import component improvements.
  9. Attaching an imported file to a DataShop dataset allows that file to be used in other workflows. With this release, new DataShop projects and datasets can be created directly from the Import component.

    We've added a progress bar for uploading larger files.

  10. New and updated workflow components.
    • Transaction Aggregator. This component generates a student-step rollup from a tab-delimited transaction file. The student-step rollup is the required input for several of the Analysis components, e.g., AFM and BKT.
    • Tetrad Search. This component now implements the FASK search algorithm. Also, the Tetrad components now use the latest version of the Tetrad libraries (v6.5.4).
    • Assessment Analysis. This component is an extension of the Cronbach's Alpha component, adding columns for: (1) each item's percent correct, (2) the standard deviation of the correctness and (3) the correlation of each item to the final score.
    • Learning Curve Visualization. The component now supports categorizing learning curves 'By Student' as well as a composite curves option, e.g., 'All Students' and 'All KCs'.
Posted by Cindy at 10:00 AM

Tuesday, October 30, 2018

LearnSphere 2.1 released!

This release features improvements to Tigris, the online workflow authoring tool which is part of LearnSphere.

  1. Redesign of workflow list to allow for better navigation.
  2. Returning users will recognize that the layout of the main workflows page has changed to allow for better navigation, both of their own and others' workflows. Users can now create folders in which to organize their workflows, grouping them by analysis method, course, or data type, for example.

  3. Advanced search of workflows.
  4. Also new to the main workflows page is an Advanced Search toolbox. The list of workflows can be filtered by owner, component, date range, and access level for the data included in the workflow. The search covers not only component names and types (e.g., Analysis, Transform) but also workflow descriptions, results, and folders.

  5. New visualization components.
  6. Four new visualization components have been added: Scatter Plot, Bar Chart, Line Graph and Histogram. Each requires a tab-delimited file as input. The file must have column headers, but can contain numeric, date, or categorical data. In the example below, a DataShop student-step export is used with the Scatter Plot component to visualize the number of times a student has seen a problem vs. the amount of time it took to complete a step.

    These visualization components produce dynamic content, allowing users to change both the variables that are being visualized in the output, as well as the look-and-feel of the graph, without having to re-run the entire workflow. The visualization can be downloaded as a PNG image file.

  7. Private options.
  8. Component developers may wish to have options that contain sensitive data, e.g., login information or keys. Examples of this are the ImportXAPI and Anonymize components. For instance, in the figure below you can see that the ImportXAPI component requires a URL for the data as well as the user id and password required to access the data. With support for private options, the component author can ensure that sensitive fields such as these are visible only to the workflow owner, while other options default to 'public' and are visible to all users.

  9. WebServices for LearnSphere.
  10. With this release you can use Web Services to programmatically retrieve LearnSphere data, including lists of workflows, as well as attributes and results for specific workflows. In the next release this functionality will be extended to allow users to create, modify and run workflows programmatically as well.
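A call to retrieve the workflow list might be constructed as below. This is a hypothetical sketch: the base URL, endpoint path, parameter names, and token header are placeholders, so consult the web services documentation for the real API before use.

```python
# Hypothetical sketch of a LearnSphere web services request for the list of
# workflows. Endpoint path, query parameters, and auth header are assumed
# placeholders, NOT the documented API.
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://pslcdatashop.web.cmu.edu/learnsphere/services"  # placeholder

def workflow_list_request(owner=None, component=None, token="YOUR-TOKEN"):
    params = {k: v for k, v in {"owner": owner, "component": component}.items() if v}
    url = f"{BASE}/workflows?{urlencode(params)}" if params else f"{BASE}/workflows"
    return Request(url, headers={"Authorization": f"token {token}"})

req = workflow_list_request(owner="cindy")
# send with urllib.request.urlopen(req) once the real endpoint and
# credentials are known
```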

  11. Ability to link a workflow to one or more datasets.
  12. Creating a workflow from a DataShop dataset will establish a relationship between the dataset and the workflow. However, users may wish to link multiple datasets to a workflow, and they may want to do so only after creating the workflow. A new "Link" icon on each workflow page, available to the workflow owner, opens a dialog listing datasets that can be referenced. Users viewing the workflow can click the "datasets" link below the workflow name to see which datasets have been linked to it.

  13. Unique URL for each workflow gives ability to link directly to a workflow.
  14. To facilitate easy sharing, each workflow now has a unique URL. The URL can also be bookmarked.

  15. In addition to the above changes, there are several component improvements.
    1. The R-based iAFM and Linear Modeling components have been optimized, resulting in marked performance improvements.
    2. Learning Curve categorization. A new feature, enabled by default, categorizes the learning curves (graphs of error rate over time for different KCs, or skills) generated by this visualization component into one of four categories. This can help to identify areas for improvement in the KC model or student instruction. Learn more about the categories here.
    3. The OutputComparator now allows for multiple matching conditions.
    4. There is a new component -- OLI LO to KC -- that can be used to map learning objectives for an OLI course into DataShop KC models. The component builds a KC model import for a given OLI skill model.
    5. The output format for the PythonAFM component is now XML, making it easier to use in the OutputComparator.
    6. The TextConverter component has been extended so that XML, tab-delimited and comma-separated (CSV) inputs can be converted to either tab-delimited or CSV outputs.
Posted by Cindy at 10:00 AM

Friday, 13 July 2018

LearnSphere 2.0 and DataShop 10.2 released!

This release features more improvements to Tigris, an online workflow authoring tool which is part of LearnSphere. These improvements make it possible for users to better contribute data, analytics and explanations of their workflows.

  1. Tree structure for the list of components
  2. The Tigris components, still in the left-hand pane, are now displayed in a tree structure. The categories and the organization of the components are the same, and in addition, each category has a "Recently Used" folder, making it easier to find the components you use most frequently. We've also added the ability to search for components: the list updates to show only components relevant to the search term. Users can search by component name or any relevant information, e.g., the component author or the input file type.


  3. Single Import
  4. We consolidated Import functionality into a single component. Choosing a data file is now simpler because the file-type hierarchy is built into the import process. In the new Import options panel, you'll be prompted to specify your file type, which will then filter the list of available, relevant DataShop files. Alternatively, you can upload your own data file in the other tab, as shown below.

  5. Multiple files per input node
  6. Component input nodes are no longer restricted to a single file. This means that components which analyze data across files are no longer limited by their number of input nodes. For example, the OutputComparator component, which allows for visual side-by-side comparison of variables across tab-delimited, XML or Property files, now supports an unlimited number of input files.



  7. Multiple predicted error rate curves on a single Learning Curve graph
  8. The Learning Curve component now allows users to plot multiple predicted error rate curves on a single graph. In the first example we show the predicted error rates for four different KC models, generated using the Analysis AFM component. Because this component allows for multiple input files, we can also use this component to show the predicted error rate curves across different analyses, in this case, AFM and BKT (shown in the second figure).


  9. There are several new components available, including:
    • DataStage Aggregator (Transform)
    • Anonymize (Transform)
    • AnalysisFTest5X2* (Analysis)
    • AnalysisStudentClustering* (Analysis)
    • CopyCovariate* (Transform)
    • ImportXAPI* (Transform)

    The DataStage Aggregator component aggregates student transaction data from DataStage, Stanford's repository of online-course datasets.

    The Anonymize component allows users to securely anonymize a column in an input CSV file. The generated output is the original input data with the specified column populated with anonymized values. The anonymous values are generated using a salt (or "key" value). This component is useful for anonymizing students consistently across multiple files.
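The salt-based approach can be sketched as follows. This is an illustration of the general technique, not the component's documented algorithm: the choice of SHA-256 and the truncation length are our assumptions.

```python
# Sketch of salt-based column anonymization: hashing each value with a fixed
# salt yields stable pseudonyms, so the same student id maps to the same
# anonymous value across files. Hash choice and truncation are assumptions.
import csv
import hashlib
import io

def anonymize_column(csv_text, column, salt):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        digest = hashlib.sha256((salt + row[column]).encode()).hexdigest()
        row[column] = digest[:12]  # shortened for readability
    return rows

data = "student,score\nalice,10\nalice,8\nbob,7\n"
rows = anonymize_column(data, "student", salt="s3cret")
# both "alice" rows get the same pseudonym; "bob" gets a different one
```

Reusing the same salt on a second file maps each student to the same pseudonym, which is what makes cross-file anonymization consistent.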

    * These components are available at LearnSphere@Memphis.

    Posted by Cindy at 2:00 PM

Monday, 9 July 2018

Attention! DataShop downtime for release of v10.2

DataShop is going to be down for 2-4 hours beginning at 8:00am EST on Friday, July 13, 2018 while our servers are being updated with the new release.

Posted by Cindy at 9:35 AM

Friday, 23 March 2018

LearnSphere 1.9 and DataShop 10.1 released!

This release features improvements to Tigris, an online workflow authoring tool which is part of LearnSphere. These improvements make it possible for users to better contribute data, analytics and explanations of their workflows.

  1. Workflow Component Creator
  2. We invite users to contribute their analysis tools and have written a script that can be used to create workflow components. The script can be found in our GitHub repository, along with the source code for all existing components. The component documentation includes a section about running the script. In addition, other changes make it easier to author new components: there is built-in support for processing zip files, component name and type restrictions have been relaxed, and arguments can easily be passed to custom scripts.

  3. Returning Tigris users will find many usability and performance enhancements have been made.
    • We have improved processing throughput, as well as the security of Tigris workflows, by moving to a distributed architecture and off-loading component-based tasks to Amazon's Elastic Container Service.
    • Tigris and DataShop now support a LinkedIn login option.
    • Users can now annotate their workflows with additional information about the workflow -- the data being used, the flow itself and the results. The owner of a workflow can add sticky notes to the workflow and these annotations are available to other users viewing the workflow.

  4. New Analytic component functions
    • The Tetrad Graph Editor component is now a graphical interface, replacing the text-based graph definition. Users can now visually define the graph.
    • The Tetrad Knowledge component now has a drag-and-drop interface, allowing users to place variables in specific tiers.
    • Component options with a large number of selections will now open a dialog that supports multi-select and double-click behaviors.
  5. There are four new components available:
    • Apply Detector (Analysis)
    • Output Comparator (Analysis)
    • Text Converter (Transform)
    • Row Remover (Transform)

    More information about the detectors available for use in the Apply Detector component, and papers that have been published about them, can be found here. The detectors can be used to compute particular student model variables. They are computational processes -- oftentimes machine-learned -- that track psychological and behavioral states of learners based on the transaction stream with the ITS.

    The Output Comparator provides a visual comparison of up to four input files. Supported input formats are: XML, tab-delimited and (key, value)-pair properties files. The output is a tabular display allowing for a side-by-side comparison of specified variables.

    The Text Converter is used to convert XML files to a tab-delimited format, a common format for many of the analysis components in Tigris. The Row Remover allows researchers to transform a tab-delimited data source to meet certain criteria for a dataset. For example, the user can configure the component to remove rows for which values in a particular column are NULL or fall outside the acceptable range.
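The Row Remover's filtering idea can be sketched as a simple predicate over rows. The column name and acceptable range below are invented for illustration; the component's actual options may differ.

```python
# Sketch of Row Remover-style filtering on tab-delimited data: drop rows
# whose value in a chosen column is empty/NULL or outside an acceptable
# range. Column name and range are hypothetical.
def keep_row(row, column, low, high):
    value = row.get(column, "").strip()
    if not value or value.upper() == "NULL":
        return False
    try:
        return low <= float(value) <= high
    except ValueError:
        return False  # non-numeric values are removed as well

rows = [
    {"Step Duration (sec)": "4.2"},
    {"Step Duration (sec)": "NULL"},
    {"Step Duration (sec)": "9001"},
]
filtered = [r for r in rows if keep_row(r, "Step Duration (sec)", 0, 3600)]
# only the 4.2-second row survives the NULL and range checks
```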

    Posted by Cindy at 2:00 PM

Saturday, 16 March 2018

Attention! DataShop downtime for release of v10.1

DataShop is going to be down for 2-4 hours beginning at 8:00am EST on Friday, March 23, 2018 while our servers are being updated with the new release.

Posted by Cindy at 1:00 PM

Tuesday, 7 November 2017

DataShop 10.0 released!

With this release of DataShop we continue to extend the functionality of Tigris, the LearnSphere workflow tool, as well as enhance its usability. There is now a 'Recommended Workflows' section at the top of the main Tigris page. This list contains the workflows we feel best highlight the most useful features of the tool. Using the 'Save As' button, these workflows can serve as templates for users to create their own. In addition, the main page now has a search feature that allows users to filter workflows by name, owner, or component.

A focus of this release has been adding support that facilitates the creation of new components. For example, dynamic options are now supported. These give component developers option constraints that can trigger changes to the UI based on the user's selections. Dependencies can be combined in logical combinations to accommodate complex parameter sets.

The new Linear Modeling Analysis component uses this feature, allowing users to call the R functions lm, lmer, glm and glmer on a data file of their choice.

Similarly, the component definition language was extended to allow for optional inputs on components. These are common in components which generate data and also take an optional set of inputs or parameters. An example of this is the new Tetrad Graph Editor. Tetrad is a causal modeling tool that allows users to build models, simulate data from those models (or use them on real data), apply algorithms to the models and graphically display the causal relationships found.

Many features of Tetrad are now supported as Tigris workflow components, making it easier for researchers to do multiple analyses on datasets that may include data from both DataShop and external sources. For example, the following Tetrad support is now available in Tigris:

  • Data Conversion
  • Classifier
  • Estimator
  • Search
  • Knowledge
  • Graph Editor

Following is an example workflow with several of these components. A tab-delimited data file is transformed first to filter missing values and then to discretize the remaining values before passing the data to the Search component, which searches for causal explanations represented by directed graphs.

Also, two new Analysis components have been added by colleagues at LearnSphere@Memphis. They facilitate analyses of a wider variety of learning sciences data. The new modeling components are TKT (Temporal Knowledge Tracing) and LSA (Latent Semantic Analysis).

Source code for all of the LearnSphere components can be found in our GitHub repository. If you would like to add your analysis, import, transform or visualization component(s) to Tigris, please contact us for information on how to get started.

The last release added 'Request Access' support to workflows, allowing users to request access to data and results in public workflows with shareable data, but it required that all of the data used in the workflow be shareable. Workflows often use multiple data sources, though, so authorization is now enforced per-component. This means that workflows which include both private and shared data can be partially accessed by users. Results and data that are inaccessible show up as 'Locked' components.

In addition to the above Tigris improvements, the following features were added to DataShop:

  • The Learning Curve Model Values page now includes the 'Number of Unique Steps' and 'Number of Observations' for each skill (KC) in the selected Knowledge Component model.
  • The Web Services API was extended to allow users to query and modify project authorization values.
  • Tigris and DataShop both now support a GitHub login option.

Posted by Cindy at 2:00 PM

Wednesday, 1 November 2017

Attention! DataShop downtime for release of v10.0

DataShop is going to be down for 2-4 hours beginning at 8:00am EST on Tuesday, November 7, 2017 while our servers are being updated with the new release.

Posted by Cindy at 3:00 PM

Tuesday, 27 June 2017

DataShop 9.4 released - several Tigris enhancements and bug fixes

The latest release of DataShop includes several enhancements and bug fixes for Tigris, the LearnSphere workflow tool.

Returning users will notice a new user interface for Tigris. We have changed the look-and-feel of the tool while making numerous styling improvements and fixing several bugs.

The biggest change is the addition of Request Access support to Tigris. DataShop users are familiar with the feature that allows users to request access to projects with private, shareable datasets. This feature has been extended to Tigris; users can request access to data directly from the Workflows page.

Workflows that use private, shareable data will have a REQUEST button with which users can ask for access to the workflow data. Without access to the data, users are able to view the Public workflows as a template only. Regardless of data access, users can make a copy of any Public workflow for use with their own data.

Most of the Tigris components now have tooltips which contain information on what the component does, the required input(s) and the output(s) generated as well as the component options.

The implementation code for each of the Tigris workflow components is publicly available in GitHub. If you would like to add your analysis, import, transform or visualization component to Tigris, please contact us for information on how to get started.

In addition to these Tigris improvements, the maximum allowed size for Custom Field values has been increased -- values may now be up to 16 MB -- and a bug in the renaming of Knowledge Component (KC) models has been fixed.

Posted by Cindy at 5:00 PM

Sunday, 25 June 2017

Attention! DataShop downtime for release of v9.4

DataShop is going to be down for 2-4 hours beginning at 9:00am EST on Monday, June 26, 2017 while our servers are being updated with the new release.

Posted by Cindy at 5:00 PM

Thursday, 16 February 2017

DataShop 9.3 released - Beta version of Tigris

The latest release of DataShop includes a Beta version of the workflow tool, now referred to as Tigris.

In order to facilitate the sharing of analyses, Tigris users can view global workflows created by other users. If the data included in a workflow is public or is attached to a dataset the user has access to, then the workflow imports, component options, and results are all accessible. If the user does not have access to the dataset, they can view the workflow as a template. In either case, users can create a copy of the workflow for use with their own data.

As part of the LearnSphere project, contributors from CMU, the University of Memphis, MIT and Stanford have been building workflow components, many of which are already available in Tigris. The latest code, with descriptions of each component, can be found in GitHub. If you would like to add your analysis, import, or visualization component to Tigris, please contact us for information on how to get started.

In addition to Tigris improvements, we have added a few enhancements and fixed several bugs:

  • Users can now make a web service call to retrieve the list of data points that make up any given Learning Curve graph. The list of points can be generated for a particular skill in a specific skill model (KC Model). More about this feature can be found in the updated DataShop Public API doc.
  • For OLI datasets, the Learning Curve graphs were extended to include a "high stakes error rate" data point. If you have an OLI dataset for which you'd like to see this analysis, please contact us as we will need to reaggregate your dataset to generate the necessary information.

  • Long Input values were being truncated in the Error Report. This issue has been addressed.
  • The Problem List page was failing to load for datasets with a very large number of problems per hierarchy, or dataset level. This has been fixed.
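As a sketch of how a client might call the new web service, the snippet below assembles a request URL for the Learning Curve data points. The endpoint path and parameter names here are hypothetical placeholders; the actual routes and authentication requirements are documented in the DataShop Public API doc.

```python
from urllib.parse import urlencode

def learning_curve_url(dataset_id, kc_model, skill,
                       base="https://pslcdatashop.web.cmu.edu"):
    """Build a request URL for Learning Curve data points.

    NOTE: the path and query parameter names are illustrative only;
    consult the DataShop Public API doc for the real endpoint and the
    required authentication headers.
    """
    query = urlencode({"kcModel": kc_model, "skill": skill})
    return f"{base}/services/datasets/{dataset_id}/learningcurves?{query}"

url = learning_curve_url(123, "Default", "subtract-fractions")
```

The returned list of points can then be fed into any plotting or statistics package, rather than reading values off the graph by hand.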
Posted by Cindy at 10:00 AM

Friday, 10 February 2017

Attention! DataShop downtime for release of v9.3

DataShop is going to be down for 2-4 hours beginning at 8:00am EST on Thursday, February 16, 2017 while our servers are being updated with the new release.

Posted by Cindy at 3:00 PM

Friday, 14 October 2016

DataShop 9.2 released - Alpha version of Workflow tool

The latest release of DataShop introduces an analytic workflow authoring tool. The alpha version of this tool allows users to build and run component-based process models to analyze, manipulate and visualize data.

The workflow authoring tool is part of the community software infrastructure being built under the umbrella of the LearnSphere project, with partners at Stanford, MIT and the University of Memphis. The primary data flow in a workflow is a table so users are not restricted to DataShop data. The platform will provide a way to create custom analyses and interact with proprietary data formats and repositories, such as MOOCdb, DiscourseDB and DataStage.

Users can request early access to the Workflow tool using the "Workflows" link in the left-hand navigation. Once granted access, this becomes a link to the "Manage Workflows" page, which is also available as a main tab on each dataset page.

Workflows are created by dragging and dropping components into the tool and making connections between components. Options can be configured for each component by clicking on the gear icon. For example, in the import component, users can upload a file or choose from a list of dataset files for which they have access.

Once a workflow has been run, clicking on any component's magnifying glass icon or the primary "Results" button will display the output of each component. A preview of the results is also available as a mouse-over on the component output nodes.

In the near future, we will invite users to contribute their own components to the Workflow tool. This feature will allow researchers to share their analysis tools and apply them to other datasets.

In addition to the Workflow tool, we have added a few enhancements and fixed several bugs:

  • The Metrics Report now includes an "Unspecified" category for datasets without a Domain or LearnLab configured. In previous releases these datasets were not reflected in the report, causing the amount of data shown to be less than the actual data.
  • KC Model exports are now being cached, allowing for faster exports of models in the same dataset.
  • Users running their own DataShop instances will find that Research Goals now include links to recommended datasets and papers on the master server, DataShop@CMU.
  • For Dataset Uploads, two restrictions on the upload format have been relaxed. See the Tab-delimited Format Help for details.
    • If a Step Name is specified, the Selection-Action-Input is no longer required.
    • Previously, if both the Problem View (PV) and Problem Start Time (PST) were specified, then the PV was recomputed based on the PST. With this release, if the two values do not agree, the PV in the upload is used.
  • Users are now required to select a Domain/LearnLab designation during dataset upload.
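The new Problem View (PV) behavior can be illustrated with a small sketch. This is not DataShop's code: it simply contrasts a PV recomputed from Problem Start Times (a count of distinct start times per student-problem pair) with the value in the upload, and shows the uploaded value winning when the two disagree.

```python
def resolve_problem_views(rows):
    """Illustrative sketch of the new PV rule, not DataShop's implementation.

    Each row is (student, problem, problem_start_time, uploaded_pv).
    PV recomputed from PST is the number of distinct start times seen so
    far for the student-problem pair; with this release, an uploaded PV
    that disagrees with the recomputed value is kept as-is.
    """
    seen = {}      # (student, problem) -> set of start times observed
    resolved = []
    for student, problem, pst, uploaded_pv in rows:
        starts = seen.setdefault((student, problem), set())
        starts.add(pst)
        recomputed_pv = len(starts)
        # New behavior: trust the uploaded value when one is present.
        resolved.append(uploaded_pv if uploaded_pv is not None else recomputed_pv)
    return resolved
```

Under the old behavior, the recomputed value would have silently replaced the uploaded one.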
Posted by Cindy at 10:00 AM

Tuesday, 26 April 2016

DataShop 9.1 released

In the spirit of collaboration, this release focuses on integration with our LearnSphere partners, with the long-term goal of creating a community software infrastructure that supports sharing, analysis and collaboration across a wide variety of educational data. Building on DataShop and efforts by partners Stanford, MIT and the University of Memphis, LearnSphere will not only maintain a central store of metadata about what datasets exist, but also have distributed features allowing contributors to control access to their own data. The primary features in support of this collaboration are:

  • DataShop now supports both Google and InCommon Federation single sign-on (SSO) options. SSO allows users to access DataShop with the same account they already use elsewhere, e.g., their university or institution account in the case of the InCommon login.

    If you currently use the local login option, please contact us about migrating your account to one of the SSO options.

  • Users can now upload a DiscourseDB discourse to DataShop. With support for DiscourseDB, users can view meta-data for discourses and, with appropriate access, download the database import file (MySQL dump).

  • We have developed a DataShop virtual machine instance (VMI) which allows users to configure their own slave DataShop instance. The remote (slave) instance is a fully-functioning DataShop instance that runs on your server, allowing you to maintain full control over your data, while having your dataset meta-data synced with the production, or master, DataShop instance. If you are interested in having your site host a remote DataShop instance, please contact us.

In addition to the headlining features, this release also adds the following support:

  • Users can now create a sample of their dataset by filtering on Custom Fields. Sampling by the name and/or the value of the custom field is supported. This allows users to create subsets of datasets based on particular values assigned to each transaction by the tutor. For example, a tutor can mark each transaction as belonging to a high- or low-stakes step, allowing those analyzing the data to filter on that distinction.

  • The Additive Factors Model (AFM) is no longer limited explicitly by the number of skills in a skill model. Previously, AFM would not be run if there were more than 300 skills in a model. Now, the number of students and the size of the step roll-up, as well as the number of skills, factor into the decision.
  • The file size limit for dataset and file uploads was increased from 200MB to 400MB.
  • The number of KC Models in a dataset is now part of the dataset summary on the project page.

Bug fixes

  • Alignment errors were fixed in the KC Model Export for the case of multiple models with multiple skills.
  • Clearing the Project on the Dataset Info page no longer results in an error.
  • The Error Report now correctly displays HTML/XML inputs in the Answer and Feedback/Classification columns. Similarly, display errors resulting from inputs that contain mark-up were fixed in the Exports.
Posted by Cindy at 10:00 AM

Friday, 4 September 2015

DataShop 9.0 released

With the latest release of DataShop, our focus was on fixing bugs and enhancing a few existing features.

  • Users can now quickly navigate from problem-specific information in a Learning Curve or Performance Profiler report directly to that problem in the Error Report; an "Error Report" button has been added to the tooltips. The Error Report includes information on the actual values students entered and the feedback received when working on the problem.

  • In the Performance Profiler, if a secondary KC model is selected, the skills from the secondary model that are present in the problem are included in the problem info tooltip.

  • If the Additive Factors Model (AFM) or Cross Validation (CV) algorithms fail or cannot be run, the reason is now available to the user as a tooltip. The tooltip is present when hovering over the status in the KC Models table. If you have any follow-up questions, remember that you can always send email to datashop-help.

  • Users can now sort the skills in a particular KC model to indicate learning difficulty. By sorting the KC model skills by intercept and then tagging those for which the slope is below some threshold, users can easily identify skills that may be misspecified and should be split into multiple skills. See the DataShop Tutorial videos on how to change the skills and test the result of that change. This sorting feature is available on the "Model Values" tab of the Learning Curve page.

  • The Cross Validation calculation was modified to provide more statistically valid results. The new calculation computes an average over 20 runs in determining the root mean squared error (RMSE).
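The averaging step can be sketched as follows. This is a generic illustration, not DataShop's own cross-validation code: the `model` argument and the fold re-randomization are stand-ins for details that differ in the real implementation.

```python
import math
import random

def rmse(predicted, actual):
    """Root mean squared error for a single cross-validation run."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def averaged_rmse(model, data, runs=20, seed=0):
    """Average RMSE over several runs, re-shuffling the data each time.

    `model` maps an input to a prediction; `data` is a list of
    (input, observed) pairs. The shuffle stands in for re-randomizing
    fold assignments between runs.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        rng.shuffle(data)  # new random fold assignment each run
        predictions = [model(x) for x, _ in data]
        totals.append(rmse(predictions, [y for _, y in data]))
    return sum(totals) / runs
```

Averaging over repeated runs reduces the variance introduced by any single random fold assignment, which is what makes the reported RMSE more statistically stable.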

Bug fixes

  • The Student-Step Export was updated to print only a single predicted-error-rate value for steps with multiple skills, as the values are always the same.
  • The Help pages for the Additive Factors Modeling (AFM) have been updated to indicate that DataShop implements a compensatory sum across all Knowledge Components when there are multiple KCs assigned to a single step.
  • The KC Model Import was fixed to ensure that invalid characters cannot be used in the model name not only during initial model import, but also in the dialog box that comes up when a duplicate name is detected.
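The compensatory sum mentioned above can be made concrete with a minimal sketch of the AFM prediction for a step tagged with multiple KCs. Parameter names here (student proficiency, KC intercept, KC slope, opportunity count) follow the standard AFM formulation; DataShop's fitted values would come from the KC Models page.

```python
import math

def afm_success_prob(theta, kcs):
    """Predicted success probability for one step under AFM.

    theta: student proficiency estimate.
    kcs:   list of (intercept, slope, opportunity) tuples, one per KC
           assigned to the step. The per-KC terms are summed
           (compensatory), as the updated Help pages describe.
    """
    logit = theta + sum(beta + gamma * opp for beta, gamma, opp in kcs)
    return 1.0 / (1.0 + math.exp(-logit))
```

Because every skill on a step contributes to one summed logit, a multi-KC step has a single predicted value, which is why the Student-Step Export above prints only one predicted-error-rate per step.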
Posted by Cindy at 10:00 AM