SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities. Visit SciCrunch to register your resource.
http://www.bmu.psychiatry.cam.ac.uk/software/
A suite of programs developed for fMRI analysis in a Virtual Pipeline Laboratory, which facilitates combining program modules from different software packages into processing pipelines, creating analysis solutions that are not possible with a single software package alone. Current pipelines include fMRI analysis, statistical testing based on randomization methods, and fractal spectral analysis; pipelines are continually being added. The software is mostly written in C. The fMRI analysis package supports batch processing and comprises the following general functions at the first level of individual image analysis: movement correction (interpolation and regression), time-series modeling, data resampling in the wavelet domain, and hypothesis testing at voxel and cluster levels. Additionally, there is code for second-level analysis (group and factorial or ANOVA mapping) after co-registration of voxel statistic maps from individual images in a standard space. The main point of difference from other fMRI analysis packages is the emphasis throughout on data resampling (permutation or randomization) as the basis for inference on individual, group, and factorial test statistics at voxel and cluster levels of resolution.
Proper citation: Cambridge Brain Activation (RRID:SCR_007109)
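The resampling-based inference described above can be illustrated with a small example. The following Python sketch is not part of the package (which is written in C) and uses synthetic data; it runs a two-sample permutation test of the kind used for voxel-level statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel statistics for two conditions (synthetic data).
cond_a = rng.normal(0.5, 1.0, size=20)
cond_b = rng.normal(0.0, 1.0, size=20)

observed = cond_a.mean() - cond_b.mean()

# Permutation (randomization) test: repeatedly shuffle condition
# labels and recompute the statistic to build a null distribution.
pooled = np.concatenate([cond_a, cond_b])
n_perm = 10_000
null = np.empty(n_perm)
for i in range(n_perm):
    rng.shuffle(pooled)
    null[i] = pooled[:20].mean() - pooled[20:].mean()

# Two-sided p-value: fraction of permuted statistics at least as extreme.
p = (np.abs(null) >= abs(observed)).mean()
print(f"observed={observed:.3f}, p={p:.4f}")
```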
KNIME (Konstanz Information Miner) is a user-friendly and comprehensive open-source data integration, processing, analysis, and exploration platform. KNIME (pronounced "naim") is a graphical workbench for the entire analysis process: data access, data transformation, initial investigation, powerful predictive analytics, visualization, and reporting. The open integration platform provides over 1,000 modules (nodes), including those of the KNIME community and its extensive partner network. KNIME can be downloaded onto the desktop and used free of charge. KNIME products include additional functionality such as shared repositories, authentication, remote execution, scheduling, SOA integration, and a web user interface, as well as world-class support. Robust big data extensions are available for distributed frameworks such as Hadoop. KNIME is used by over 3,000 organizations in more than 60 countries. The modular data exploration platform, initially developed at the University of Konstanz, Germany, enables the user to visually create data flows, execute selected analysis steps, and later investigate the results through interactive views on data and models. KNIME is a proven integration platform for tools of numerous vendors thanks to its open and modular API. The KNIME.com product line includes an Enterprise Server, Cluster Execution, reporting solutions, and professional KNIME support subscriptions. KNIME.com also offers services such as data analysis, hands-on training, and the development of customized components for KNIME.
Proper citation: Knime (RRID:SCR_006164)
http://www.nitrc.org/projects/abc
A comprehensive processing pipeline for brain MRIs, developed and used at the University of North Carolina and the University of Utah. The pipeline includes image registration, filtering, segmentation, and inhomogeneity correction. The tool is cross-platform and can be run within 3D Slicer or as a stand-alone program. The image segmentation algorithm is based on the EMS software developed by Koen van Leemput.
Proper citation: ABC (Atlas Based Classification) (RRID:SCR_005981)
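The EM approach underlying the segmentation can be illustrated loosely: a Gaussian mixture fitted by expectation-maximization classifies voxel intensities into tissue classes. A minimal Python sketch with synthetic intensities and scikit-learn (a stand-in for the idea, not ABC's actual implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic 1-D voxel intensities drawn from three "tissue" classes
# (e.g., CSF, gray matter, white matter in a T1 image).
intensities = np.concatenate([
    rng.normal(30, 5, 1000),   # CSF-like
    rng.normal(80, 8, 3000),   # gray-matter-like
    rng.normal(120, 6, 2000),  # white-matter-like
]).reshape(-1, 1)

# Fit a 3-class Gaussian mixture with the EM algorithm and label voxels.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)
print("estimated class means:", np.sort(gmm.means_.ravel()))
```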
http://cbcb.umd.edu/software/metAMOS
A modular and open source metagenomic assembly and analysis pipeline.
Proper citation: MetAMOS (RRID:SCR_011914)
Kepler is a software application for analyzing and modeling scientific data. Using Kepler's graphical interface and components, scientists with little background in computer science can create executable models, called scientific workflows, for flexibly accessing scientific data (streaming sensor data, medical and satellite images, simulation output, observational data, etc.) and executing complex analyses on this data. Kepler is developed by a cross-project collaboration led by the Kepler/CORE team. The software builds upon the mature Ptolemy II framework, developed at the University of California, Berkeley. Ptolemy II is a software framework designed for modeling, design, and simulation of concurrent, real-time, embedded systems. The Kepler Project is dedicated to furthering and supporting the capabilities, use, and awareness of the free and open source scientific workflow application, Kepler. Kepler is designed to help scientists, analysts, and computer programmers create, execute, and share models and analyses across a broad range of scientific and engineering disciplines. Kepler can operate on data stored in a variety of formats, locally and over the internet, and is an effective environment for integrating disparate software components, such as merging R scripts with compiled C code, or facilitating remote, distributed execution of models. Using Kepler's graphical user interface, users simply select and then connect pertinent analytical components and data sources to create a scientific workflow: an executable representation of the steps required to generate results. The Kepler software helps users share and reuse data, workflows, and components developed by the scientific community to address common needs. Kepler is a Java-based application that is maintained for the Windows, OSX, and Linux operating systems. The Kepler Project supports the official code-base for Kepler development, as well as provides materials and mechanisms for learning how to use Kepler, sharing experiences with other workflow developers, reporting bugs, suggesting enhancements, etc. The Kepler Project Leadership Team works to assure the long-term technical and financial viability of Kepler by making strategic decisions on behalf of the Kepler user community, as well as providing an official and durable point of contact to articulate and represent the interests of the Kepler Project and the Kepler software application. Details about how to get more involved with the Kepler Project can be found in the developer section of this website.
Proper citation: Kepler (RRID:SCR_005252)
An online toolbox and workflow management system for a broad range of bioinformatics and systems biology applications. The individual modules, or Bricks, are unified under a standardized interface with a consistent look and feel, and can flexibly be combined into comprehensive workflows. Workflow management is handled intuitively through a simple drag-and-drop system. With this system, you can edit the predefined workflows or compose your own workflows from scratch. Your own Bricks can easily be added as scripts or plug-ins and can be used in combination with pre-existing analyses. geneXplain GmbH provides a number of state-of-the-art Bricks; some can be obtained free of charge, while others require licensing for a small fee in order to guarantee active maintenance and dynamic adaptation to the rapidly developing know-how in this field.
Proper citation: geneXplain (RRID:SCR_005573)
An open-source and domain-independent workflow management system: a suite of tools used to design and execute scientific workflows and aid in silico experimentation. Taverna Workbench now has support for service sets, offline workflow editing, workflow validation, improved workflow run monitoring, and the pausing and canceling of workflow runs. The command line tool allows you to run workflows outside of the workbench and is available as a stand-alone download or bundled with the Taverna Workbench 2.2.0 download. The Taverna suite is written in Java and includes the Taverna Engine (used for enacting workflows), which powers both the Taverna Workbench (the desktop client application) and the Taverna Server (which allows remote execution of workflows). Taverna is also available as a command line tool for quick execution of workflows from a terminal. Taverna 2.2.0 includes:
* Copy/paste, shortcuts, undo/redo, drag and drop
* Animated workflow diagram
* Remembers added/removed services
* Secure Web services support
* Secure access to resources on the web
* Up-to-date R support
* Intermediate values during workflow runs
* myExperiment integration
* Excel and csv spreadsheet support
* Command line tool
Proper citation: Taverna (RRID:SCR_004437)
http://code.google.com/p/psom/
A lightweight software library to manage complex multi-stage data processing. A pipeline is a collection of jobs, i.e., Matlab or Octave code with a well-identified set of options, using files for inputs and outputs. To use PSOM, the only requirement is to generate a description of a pipeline in the form of a simple Matlab/Octave structure. PSOM then automatically offers the following services:
* Run jobs in parallel using multiple CPUs or within a distributed computing environment.
* Generate log files and keep track of the pipeline execution. These logs are detailed enough to fully reproduce the analysis.
* Handle job failures: successful completion of jobs is checked and failed jobs can be restarted.
* Handle updates of the pipeline: change options or add jobs and let PSOM figure out what to reprocess.
Proper citation: Pipeline System for Octave and Matlab (RRID:SCR_009637)
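The dependency model PSOM infers from such a structure is straightforward: a job depends on whichever jobs produce its input files. A conceptual Python analogue follows; PSOM itself takes a Matlab/Octave structure, and the field names below merely mirror the description above (the jobs are hypothetical, and this is not PSOM's API):

```python
# Each job declares input/output files, as in a PSOM pipeline structure;
# dependencies are inferred from file overlap.
jobs = {
    "preprocess": {"files_in": ["raw.nii"], "files_out": ["clean.nii"]},
    "analyze":    {"files_in": ["clean.nii"], "files_out": ["stats.mat"]},
    "report":     {"files_in": ["stats.mat"], "files_out": ["report.pdf"]},
}

# Map each output file to the job that produces it.
producers = {f: name for name, j in jobs.items() for f in j["files_out"]}

def run(name, done=None):
    """Run a job after recursively running the jobs that produce its inputs."""
    done = set() if done is None else done
    if name in done:
        return
    for f in jobs[name]["files_in"]:
        if f in producers:
            run(producers[f], done)
    print(f"running {name}")  # a real system would execute the job's command
    done.add(name)

run("report")  # prints preprocess, analyze, report in dependency order
```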
http://www.nitrc.org/projects/dbgapcleaner/
Tool to assist site staff with curation of data dictionary, data item, and subject item files in preparation for uploading and sharing data with the dbGaP resource.
Proper citation: DbGaP Cleaner (RRID:SCR_009462)
http://www.nitrc.org/projects/lwdp/
A lightweight framework for setting up dependency-driven processing pipelines. The tool is essentially a configurable shell script (sh/bash), which can be included in other scripts and primarily provides a small number of utility functions for dependency checking and NFS-safe file locking for cluster processing.
Proper citation: Lightweight Data Pipeline (RRID:SCR_014135)
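As an illustration of the NFS-safe file-locking idea mentioned above (the tool itself is a shell script; this Python sketch shows the classic hard-link locking pattern, not the tool's own code, and the lock path is illustrative):

```python
import os
import socket
import tempfile

def acquire_lock(lockfile: str) -> bool:
    """Classic NFS-safe lock: create a unique file, hard-link it to the
    shared lock name, and verify the link count. link() is atomic even
    on NFS, where O_EXCL historically was not reliable."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(lockfile) or ".")
    os.write(fd, f"{socket.gethostname()}:{os.getpid()}\n".encode())
    os.close(fd)
    try:
        try:
            os.link(tmp, lockfile)
        except FileExistsError:
            return False  # someone else holds the lock
        return os.stat(tmp).st_nlink == 2
    finally:
        os.unlink(tmp)

if acquire_lock("/tmp/pipeline.lock"):
    print("lock held; safe to process this stage")
```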
http://www.nitrc.org/projects/bvqxtools
A Matlab-based toolbox initially created for reading, writing, and processing of BrainVoyager (QX) files in Matlab.
Proper citation: NeuroElf (RRID:SCR_014147)
http://www.nitrc.org/projects/aperture/
A MATLAB-based toolbox for analysis of EEG, MEG, and ECoG data. APERTURE allows flexible multivariate analysis of ERPs and oscillatory activity and supports mass-univariate analysis with advanced statistical tests. Computations are accelerated using parallel computing supported through the MATLAB distributed computing toolbox. Examination of large, high-dimensional datasets is made simple through data visualization tools, including advanced plotting routines and generation of PDF reports with many figures.
Proper citation: APERTURE (RRID:SCR_014082)
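The mass-univariate approach can be sketched outside of MATLAB as well. A minimal Python example on synthetic data (not APERTURE's code): an independent-samples t-test at every channel/timepoint followed by Benjamini-Hochberg FDR correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical ERP data: trials x (channels * timepoints) per condition.
cond_a = rng.normal(0.2, 1.0, size=(40, 64 * 100))
cond_b = rng.normal(0.0, 1.0, size=(40, 64 * 100))

# Mass-univariate testing: one t-test per channel/timepoint.
t, p = stats.ttest_ind(cond_a, cond_b, axis=0)

# Benjamini-Hochberg step-up: reject the k smallest p-values where
# p_(k) <= (k / m) * alpha.
order = np.argsort(p)
m = p.size
bh_threshold = 0.05 * np.arange(1, m + 1) / m
significant = np.zeros(m, dtype=bool)
passed = np.nonzero(p[order] <= bh_threshold)[0]
if passed.size:
    significant[order[: passed[-1] + 1]] = True
print(f"{significant.sum()} of {m} tests significant at FDR 0.05")
```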
http://www.perkinelmer.com/catalog/category/id/living%20image%20software
In vivo imaging software which facilitates workflow for in vivo optical, X-ray and microCT image acquisition, analysis and data organization.
Proper citation: Living Image software (RRID:SCR_014247)
http://www.scied.com/pr_cmbas.htm
A software system to assist with cloning simulation, enzyme operations, and graphic map drawing. Clone Manager can also be used as a way to view or edit sequence files, find open reading frames, translate genes, or find genes or text in files. Clone Manager Professional is an upgraded version of Clone Manager Basic.
Proper citation: Clone Manager Software (RRID:SCR_014521)
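The kind of ORF finding and translation described can be sketched in a few lines of Python, assuming Biopython is installed (Clone Manager itself is a GUI application; the sequence below is made up):

```python
from Bio.Seq import Seq

# Hypothetical sequence; Clone Manager does this kind of search via its GUI.
dna = Seq("CCATGGCTGAATTCAAAGGATAAACGC")

# Find the first ATG and translate the downstream frame to the stop codon.
s = str(dna)
start = s.find("ATG")
if start != -1:
    orf = dna[start:]
    orf = orf[: len(orf) - len(orf) % 3]  # trim to a whole number of codons
    print("ORF translation:", orf.translate(to_stop=True))  # -> MAEFKG
```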
http://huttenhower.sph.harvard.edu/humann
A pipeline which takes short DNA/RNA reads as inputs and produces gene and pathway summaries as outputs. The pipeline converts sequence reads into coverage and abundance tables summarizing the gene families and pathways in one or more microbial communities.
Proper citation: HUMAnN (RRID:SCR_014620)
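The read-to-table summarization can be illustrated conceptually. A minimal Python sketch (not HUMAnN's actual algorithm; the gene families and lengths are hypothetical) that aggregates read hits into a length-normalized gene-family abundance table:

```python
from collections import defaultdict

# Hypothetical read-to-gene-family hits: (gene_family, gene_length_bp).
hits = [
    ("K00001", 900), ("K00001", 900), ("K00002", 1500),
    ("K00001", 900), ("K00002", 1500), ("K00003", 600),
]

# Summarize hits into a gene-family abundance table, normalizing each
# count by gene length (reads per kilobase), in the spirit of the
# coverage/abundance tables the pipeline produces.
abundance = defaultdict(float)
for family, length in hits:
    abundance[family] += 1000.0 / length

for family in sorted(abundance):
    print(f"{family}\t{abundance[family]:.2f}")
```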
http://code.google.com/p/panda-tool/
A MATLAB toolbox for pipeline processing of diffusion MRI images. For each subject, PANDA can provide two types of outputs: (i) diffusion parameter data that is ready for statistical analysis; (ii) brain anatomical networks constructed using diffusion tractography. There are three types of resultant diffusion parameter data: WM atlas-level, voxel-level, and TBSS-level. The brain network generated by PANDA supports various edge definitions, e.g., fiber number, length, or FA-weighted. The key advantages of PANDA are:
* Fully automatic processing from raw DICOM/NIfTI to final outputs.
* Support for both sequential and parallel computation. The parallel environment can be a single desktop with multiple cores or a computing cluster with an SGE system.
* A very friendly GUI (graphical user interface).
Proper citation: PANDA (RRID:SCR_002511)
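The edge definitions mentioned above are easy to illustrate. A conceptual Python sketch (not PANDA's code; the streamlines are hypothetical) that builds fiber-number, mean-length, and mean-FA matrices from region-labeled streamlines:

```python
import numpy as np

# Hypothetical tractography results: each streamline connects two
# regions and carries a length (mm) and a mean FA value.
streamlines = [
    (0, 1, 42.0, 0.45), (0, 1, 39.5, 0.51), (1, 2, 60.2, 0.38),
]
n = 3  # number of regions

# Accumulate the three edge definitions: fiber count, length, FA.
count = np.zeros((n, n))
length_sum = np.zeros((n, n))
fa_sum = np.zeros((n, n))
for i, j, length, fa in streamlines:
    for a, b in ((i, j), (j, i)):  # symmetric (undirected) network
        count[a, b] += 1
        length_sum[a, b] += length
        fa_sum[a, b] += fa

with np.errstate(divide="ignore", invalid="ignore"):
    mean_length = np.where(count > 0, length_sum / count, 0)
    mean_fa = np.where(count > 0, fa_sum / count, 0)
print("fiber-number matrix:\n", count)
```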
A configurable, open-source, Nipype-based, automated processing pipeline for resting-state functional MRI (R-fMRI) data, for use by both novice and expert users. C-PAC was designed to bring the power, flexibility, and elegance of the Nipype platform to users in a plug-and-play fashion, without requiring the ability to program. Using an easy-to-read, text-editable configuration file, C-PAC can rapidly orchestrate automated R-fMRI processing procedures, including:
- quality assurance measurements
- image preprocessing based upon user-specified preferences
- generation of functional connectivity maps (e.g., correlation analyses)
- customizable extraction of time-series data
- generation of local R-fMRI metrics (e.g., regional homogeneity, voxel-matched homotopic connectivity, fALFF/ALFF)
C-PAC makes it possible to use a single configuration file to launch a factorial number of pipelines differing with respect to specific processing steps.
Proper citation: C-PAC (RRID:SCR_000862)
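The factorial expansion of a single configuration can be sketched directly. In the hypothetical Python example below (the option names are illustrative, not C-PAC's actual configuration keys), three two-way choices yield eight distinct pipelines:

```python
import itertools

# Hypothetical option lists; a single configuration can specify several
# choices per step, and the pipeline set expands factorially.
options = {
    "smoothing_mm": [4, 6],
    "bandpass": ["0.01-0.1", "none"],
    "global_signal_regression": [True, False],
}

# Enumerate every combination, i.e., the factorial set of pipelines.
keys = list(options)
for combo in itertools.product(*options.values()):
    print(dict(zip(keys, combo)))  # 2 * 2 * 2 = 8 distinct pipelines
```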
https://github.com/aplbrain/saber
Library of containerized tools and a workflow deployment system for enabling processing of large neuroimaging datasets. Provides canonical neuroimaging workflows specified in the Common Workflow Language (CWL); integration with a workflow execution engine (Airflow), an imaging database (bossDB), and a parameter database (DataJoint) to deploy workflows at scale; and tools to automate deployment and optimization of neuroimaging pipelines.
Proper citation: Scalable Analytics for Brain Exploration Research (RRID:SCR_018812)
https://jtremblay.github.io/amplicontagger.html
An rRNA marker gene amplicon pipeline coded in a Python framework that enables fine-tuning and integration of virtually any rRNA gene amplicon bioinformatic procedure. Designed to work within an HPC environment, supporting a complex network of job dependencies with a smart restart mechanism in case of job failure or parameter modifications.
Proper citation: AmpliconTagger (RRID:SCR_019112)
https://github.com/pensoft/omicsdatapaper
Software package for streamlined import of omics metadata from the European Nucleotide Archive (ENA) into an omics data paper manuscript. The Omics Data Paper R Shiny app demonstrates a workflow for automatic import of ENA genomic metadata into an omics data paper manuscript. Streamlined conversion of metadata into a manuscript facilitates the authoring of omics data papers, which allow omics dataset creators to receive credit for their work and to improve the description and visibility of their datasets.
Proper citation: Omics Data Paper Generator (RRID:SCR_019809)
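As a rough sketch of retrieving such metadata yourself, assuming the ENA Portal API's filereport endpoint (this is not the app's internal code; the accession is an arbitrary public example, and the field choices are illustrative):

```python
import requests

# Fetch run metadata for a study from the ENA Portal API as TSV.
url = "https://www.ebi.ac.uk/ena/portal/api/filereport"
params = {
    "accession": "PRJEB1787",
    "result": "read_run",
    "fields": "run_accession,instrument_model,library_strategy",
    "format": "tsv",
}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()

# Print the header and the first few metadata rows.
for line in resp.text.splitlines()[:5]:
    print(line)
```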