SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.
http://code.google.com/p/panda-tool/
A MATLAB toolbox for pipeline processing of diffusion MRI data. For each subject, PANDA provides two types of outputs: (i) diffusion parameter data that is ready for statistical analysis; (ii) brain anatomical networks constructed using diffusion tractography. In particular, three types of diffusion parameter data are produced: WM atlas-level, voxel-level and TBSS-level. The brain networks generated by PANDA support various edge definitions, e.g. fiber number, fiber length, or FA-weighted. The key advantages of PANDA are: # fully automatic processing from raw DICOM/NIfTI to final outputs; # support for both sequential and parallel computation, where the parallel environment can be a single multi-core desktop or a computing cluster with an SGE system; # a very friendly GUI (graphical user interface).
Proper citation: PANDA (RRID:SCR_002511) Copy
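The three edge definitions mentioned above can be sketched in a few lines. This is an illustrative pure-Python example, not PANDA's actual code; the region labels and streamline records are hypothetical.

```python
# Building an anatomical network from tractography streamlines under the
# three edge definitions mentioned above: fiber number, mean fiber length,
# and mean-FA weighting. All data below is made up for illustration.
from collections import defaultdict

# Each record: (region_a, region_b, streamline_length_mm, mean_FA_along_streamline)
streamlines = [
    ("precentral_L", "precentral_R", 92.0, 0.48),
    ("precentral_L", "precentral_R", 95.5, 0.52),
    ("precentral_L", "thalamus_L",   41.2, 0.44),
]

count = defaultdict(int)        # edge weight = number of connecting fibers
length_sum = defaultdict(float)
fa_sum = defaultdict(float)

for a, b, length, fa in streamlines:
    edge = tuple(sorted((a, b)))  # undirected network: order-independent key
    count[edge] += 1
    length_sum[edge] += length
    fa_sum[edge] += fa

# Mean length and mean FA per edge -- two alternative edge weightings
mean_length = {e: length_sum[e] / count[e] for e in count}
mean_fa = {e: fa_sum[e] / count[e] for e in count}

edge = ("precentral_L", "precentral_R")
print(count[edge], mean_length[edge], mean_fa[edge])  # 2 93.75 0.5
```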
http://www.nitrc.org/projects/tapir/
A set of command line tools allowing 2D and 3D image registration, mainly for medical imaging (although also relevant to other image registration problems).
Proper citation: TAPIR (RRID:SCR_002596) Copy
A toolbox with graphical user interfaces for processing infant brain MR images. Both longitudinal and single-time-point data, in single or multiple modalities (including T1, T2, and FA), can be processed with the toolbox. The main functions of the software (step by step) are image preprocessing, brain extraction, tissue segmentation and brain labeling. A 64-bit Linux operating system is required, and a workstation or server with more than 8 GB of memory is recommended for processing many images simultaneously. The graphical user interfaces and overall framework of the software are implemented in MATLAB; the image processing functions are implemented in a combination of C/C++, MATLAB, Perl and shell languages. Parallelization technologies are used in the software to speed up image processing.
Proper citation: iBEAT (RRID:SCR_002470) Copy
An automated online framework for performing validation studies of skull-stripping methods. Registered users may download 40 T1 MRI volumes, skull-strip them with the algorithm of their choice, and upload their segmentation results to the SVE website. The server will then compare the 40 skull-stripped results against a set of manually generated brain masks. The server computes a series of measures for the uploaded data, including Jaccard and Dice measures. It also produces images for visualizing the spatial location of the segmentation errors relative to a common space. The results are archived on the server, and the measures are viewable by visitors to the site.
Proper citation: Segmentation Validation Engine (RRID:SCR_002591) Copy
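The Jaccard and Dice measures that SVE computes are simple overlap ratios between two binary masks. A minimal sketch on toy flattened masks (the masks here are invented for illustration):

```python
# Dice = 2|A∩B| / (|A|+|B|); Jaccard = |A∩B| / |A∪B| -- the overlap
# measures SVE reports when comparing a skull-stripping result against
# a manually generated brain mask.
def dice(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union

auto   = [1, 1, 1, 0, 0, 1]   # hypothetical skull-stripped mask (flattened)
manual = [1, 1, 0, 0, 1, 1]   # hypothetical manually drawn brain mask

print(dice(auto, manual), jaccard(auto, manual))  # 0.75 0.6
```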
http://www.dartmouth.edu/~nir/nirfast/
A software package for modeling near-infrared light transport in tissue and image reconstruction. This includes standard single-wavelength absorption and reduced scattering, multi-wavelength spectrally constrained models, and fluorescence models.
Proper citation: Nirfast (RRID:SCR_002503) Copy
Project to develop software tools and provide shared image validation databases for rigorous testing of non-rigid image registration algorithms. NIREP will extend the scope of prior validation projects by developing evaluation criteria and metrics using large image populations, using richly annotated image databases, using computer simulated data, and increasing the number and types of evaluation criteria. The goal of this project is to establish, maintain, and endorse a standardized set of relevant benchmarks and metrics for performance evaluation of nonrigid image registration algorithms. Furthermore, these standards will be incorporated into an exportable computer program to automatically evaluate the registration accuracy of nonrigid image registration algorithms.
Proper citation: Non-Rigid Image Registration Evaluation Project (RRID:SCR_002505) Copy
http://www.cognitiveatlas.org/
Knowledge base (or ontology) that characterizes the state of current thought in cognitive science, capturing knowledge from users with expertise in psychology, cognitive science, and neuroscience. There are two basic kinds of knowledge in the knowledge base: terms provide definitions and properties for individual concepts and tasks, while assertions describe relations between terms in the same way that a sentence describes relations between parts of speech. The goal is to develop a knowledge base that will support annotation of data in databases, as well as improved discourse in the community. It is open to all interested researchers. A fundamental feature of the knowledge base is the ability to capture not just agreement but also disagreement regarding definitions and assertions: if you see a definition or assertion that you disagree with, you can assert and describe your disagreement. The project is led by Russell Poldrack, Professor of Psychology and Neurobiology at the University of Texas at Austin, in collaboration with the UCLA Center for Computational Biology (A. Toga, PI) and the UCLA Consortium for Neuropsychiatric Phenomics (R. Bilder, PI). Most tasks used in cognitive psychology research are not identical across different laboratories, or even within the same laboratory over time. A major advantage of anchoring cognitive ontologies at the measurement level is that tracking changes in task properties is easier than tracking changes in concept definitions and usage: task parameters are usually (if not always) operationalized objectively, offering a clear basis for judging divergence in methods, and most tasks are based on prior tasks, and thus can readily be considered descendants in a phylogenetic sense.
Proper citation: Cognitive Atlas (RRID:SCR_002793) Copy
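The two kinds of knowledge described above, terms and assertions, can be modeled as simple records. This is an illustrative sketch, not the Cognitive Atlas schema; the field names and example entries are invented.

```python
# Minimal model of the two knowledge kinds: terms (concepts/tasks with
# definitions) and assertions (typed relations between terms). Tracking
# the contributor makes disagreement a first-class citizen, as described.
from dataclasses import dataclass

@dataclass
class Term:
    name: str
    kind: str        # "concept" or "task"
    definition: str

@dataclass
class Assertion:
    subject: str
    relation: str    # e.g. "is measured by" -- hypothetical relation label
    obj: str
    contributor: str # who asserted it, so disagreements can be recorded

wm = Term("working memory", "concept", "short-term maintenance of information")
nback = Term("n-back task", "task", "respond when the stimulus matches the one n items back")
a = Assertion(wm.name, "is measured by", nback.name, "user123")
print(a.subject, a.relation, a.obj)
```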
http://www.fmrib.ox.ac.uk/fsl/
Software library of image analysis and statistical tools for fMRI, MRI and DTI brain imaging data. It includes registration tools, atlases, diffusion MRI tools for parameter reconstruction and probabilistic tractography, and a viewer. Several brain atlases, integrated into FSLView and Featquery, allow viewing of structural and cytoarchitectonic standard space labels and probability maps for cortical and subcortical structures and white matter tracts. Includes Harvard-Oxford cortical and subcortical structural atlases, Julich histological atlas, JHU DTI-based white-matter atlases, Oxford thalamic connectivity atlas, Talairach atlas, MNI structural atlas, and Cerebellum atlas.
Proper citation: FSL (RRID:SCR_002823) Copy
THIS RESOURCE IS NO LONGER IN SERVICE, documented on May 11, 2016. Repository of brain-mapping data (surfaces and volumes; structural and functional data) derived from studies including fMRI and MRI from many laboratories, providing convenient access to a growing body of neuroimaging and related data. WebCaret is an online visualization tool for viewing SumsDB datasets. SumsDB includes: * data on cerebral cortex and cerebellar cortex; * individual subject data and population data mapped to atlases; * data from FreeSurfer and other brain-mapping software besides Caret. SumsDB provides multiple levels of data access and security: * free (public) access (e.g., for data associated with published studies); * data access restricted to collaborators in different laboratories; * owner-only access for work in progress. Data can be downloaded from SumsDB as individual files or as bundles archived for offline visualization and analysis in Caret. WebCaret provides online Caret-style visualization while circumventing software and data downloads; it is a server-side application running on a Linux cluster at Washington University. WebCaret "scenes" facilitate rapid visualization of complex combinations of data. Bi-directional links between online publications and WebCaret/SumsDB provide: * links from figures in online journal articles to corresponding scenes in WebCaret; * links from metadata in WebCaret directly to relevant online publications and figures.
Proper citation: SumsDB (RRID:SCR_002759) Copy
An MRI data repository that holds a set of 7 Tesla images and behavioral metadata. Multi-faceted brain image archive with behavioral measurements. For each participant a number of different scans and auxiliary recordings have been obtained. In addition, several types of minimally preprocessed data are also provided. The full description of the data release is available in a dedicated publication. This project invites anyone to participate in a decentralized effort to explore the opportunities of open science in neuroimaging by documenting how much (scientific) value can be generated out of a single data release by publication of scientific findings derived from a dataset, algorithms and methods evaluated on this dataset, and/or extensions of this dataset by acquisition and integration of new data.
Proper citation: studyforrest.org (RRID:SCR_003112) Copy
A community database of published functional and structural neuroimaging experiments with both metadata descriptions of experimental design and activation locations in the form of stereotactic coordinates (x, y, z) in Talairach or MNI space. BrainMap provides not only data for meta-analyses and data mining, but also distributes software and concepts for quantitative integration of neuroimaging data. The goal of BrainMap is to develop software and tools to share neuroimaging results and enable meta-analysis of studies of human brain function and structure in healthy and diseased subjects. It is a tool to rapidly retrieve and understand studies in specific research domains, such as language, memory, attention, reasoning, emotion, and perception, and to perform meta-analyses of like studies. BrainMap contains the following software: # Sleuth: database searches and Talairach coordinate plotting (this application requires a username and password); # GingerALE: performs meta-analyses via the activation likelihood estimation (ALE) method, and converts coordinates between MNI and Talairach spaces using icbm2tal; # Scribe: database entry of published functional neuroimaging papers with coordinate results.
Proper citation: brainmap.org (RRID:SCR_003069) Copy
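Coordinate-space conversions like icbm2tal are applied as an affine transform: a matrix multiplies the homogeneous (x, y, z, 1) coordinate. The sketch below uses an identity placeholder matrix, NOT the published icbm2tal coefficients; use GingerALE itself for real conversions.

```python
# Applying a 4-column affine to a stereotactic coordinate -- the general
# mechanism behind MNI<->Talairach conversions. The matrix here is an
# identity placeholder; the actual icbm2tal coefficients are implemented
# in GingerALE and are not reproduced here.
def apply_affine(m, xyz):
    x, y, z = xyz
    vec = (x, y, z, 1.0)                     # homogeneous coordinate
    return tuple(sum(m[r][c] * vec[c] for c in range(4)) for r in range(3))

identity = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
print(apply_affine(identity, (10.0, -20.0, 30.0)))  # (10.0, -20.0, 30.0)
```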
A hierarchy of portable online interactive aids for motivating and modernizing probability and statistics applications. The tools and resources include a repository of interactive applets, computational and graphing tools, and instructional and course materials. The core SOCR educational and computational components include the following suite of web-based Java applets: * Distributions (interactive graphs and calculators); * Experiments (virtual computer-generated games and processes); * Analyses (collection of common web-accessible tools for statistical data analysis); * Games (interfaces and simulations of real-life processes); * Modeler (tools for distribution, polynomial and spectral model-fitting and simulation); * Graphs, Plots and Charts (comprehensive web-based tools for exploratory data analysis); * Additional Tools (other statistical tools and resources); * SOCR Java-based Statistical Computing Libraries; * SOCR Wiki (collaborative wiki resource); * Educational Materials and Hands-on Activities (varieties of SOCR educational materials); * SOCR Statistical Consulting. In addition, SOCR provides a suite of tools for volume-based statistical mapping (http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_AnalysesCommandLine) via command-line execution and via LONI Pipeline workflows (http://www.nitrc.org/projects/pipeline). Course instructors and teachers will find the SOCR class notes and interactive tools useful for student motivation, concept demonstrations and for enhancing their technology-based pedagogical approaches to any study of variation and uncertainty. Students and trainees may find the SOCR class notes, analyses, and computational and graphing tools extremely useful in their learning and practice. Model developers, software programmers and other engineering, biomedical and applied researchers may find the lightweight, plug-in-oriented SOCR computational libraries and infrastructure useful in their algorithm designs and research efforts.
The main types of SOCR resources are: * Interactive Java applets: a number of different applets, simulations, demonstrations, virtual experiments, and tools for data visualization and analysis; all applets require a Java-enabled browser (if you see a blank screen, see the SOCR Feedback page to find out how to configure your browser). * Instructional Resources: data, electronic textbooks, tutorials, etc. * Learning Activities: various interactive hands-on activities. * SOCR Video Tutorials: general and tool-specific screencasts.
Proper citation: Statistics Online Computational Resource (RRID:SCR_003378) Copy
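The "Experiments" applets above simulate random processes; an analogous virtual experiment in pure Python is flipping a fair coin many times and comparing the empirical proportion of heads with the theoretical 0.5 (a law-of-large-numbers demonstration; the seed is fixed only to make the run reproducible).

```python
# A coin-flip virtual experiment in the spirit of the SOCR Experiments
# applets: simulate n Bernoulli(0.5) trials and check that the empirical
# proportion of heads approaches the theoretical probability.
import random

random.seed(42)          # fixed seed so the demonstration is reproducible
n = 10_000
heads = sum(random.random() < 0.5 for _ in range(n))
prop = heads / n
print(prop)
assert abs(prop - 0.5) < 0.02   # within sampling error of the true 0.5
```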
A freely available software tool for the Windows and Linux platforms, as well as an online applet, for the analysis, comparison and search of digital reconstructions of neuronal morphologies. For the quantitative characterization of neuronal morphology, LM computes a large number of neuroanatomical parameters from 3D digital reconstruction files, starting from and combining a set of core metrics. After more than six years of development and use in the neuroscience community, LM enables the execution of commonly adopted analyses as well as more advanced functions, including: (i) extraction of basic morphological parameters, (ii) computation of frequency distributions, (iii) measurements from user-specified subregions of the neuronal arbors, (iv) statistical comparison between two groups of cells and (v) filtered selections and searches from collections of neurons based on any Boolean combination of the available morphometric measures. These functionalities are easily accessed and deployed through a user-friendly graphical interface and typically execute within a few minutes on a set of 20 neurons. The tool is available either for online use on any Java-enabled browser and platform or for download and local execution under Windows and Linux.
Proper citation: L-Measure (RRID:SCR_003487) Copy
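One of the core metrics L-Measure derives from digital reconstructions is total arbor length. A sketch on an SWC-style file (columns: id, type, x, y, z, radius, parent; parent -1 marks the root), assuming a tiny made-up neuron; this is an illustration of the metric, not L-Measure's code:

```python
# Total arbor length from an SWC-style reconstruction: sum the Euclidean
# distance from every node to its parent node.
import math

swc = """\
1 1 0 0 0 1.0 -1
2 3 3 0 0 0.5 1
3 3 3 4 0 0.5 2
"""

nodes = {}
for line in swc.strip().splitlines():
    i, t, x, y, z, r, parent = line.split()
    nodes[int(i)] = ((float(x), float(y), float(z)), int(parent))

total = 0.0
for point, parent in nodes.values():
    if parent != -1:                 # root has no parent segment
        parent_point, _ = nodes[parent]
        total += math.dist(point, parent_point)

print(total)  # 3.0 + 4.0 = 7.0
```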
http://neuroscienceblueprint.nih.gov/
Collaborative framework that includes the NIH Office of the Director and the 14 NIH Institutes and Centers that support research on the nervous system. By pooling resources and expertise, the Blueprint identifies cross-cutting areas of research, and confronts challenges too large for any single Institute or Center. The Blueprint makes collaboration a day-to-day part of how the NIH does business in neuroscience, complementing the basic missions of Blueprint partners. During each fiscal year, the partners contribute a small percentage of their funds to a common pool. Since the Blueprint's inception in 2004, this pool has comprised less than 1 percent of the total neuroscience research budget of the partners. In 2009, the Blueprint Grand Challenges were launched to catalyze research with the potential to transform our basic understanding of the brain and our approaches to treating brain disorders. * The Human Connectome Project is an effort to map the connections within the healthy brain. It is expected to help answer questions about how genes influence brain connectivity, and how this in turn relates to mood, personality and behavior. The investigators will collect brain imaging data, plus genetic and behavioral data from 1,200 adults. They are working to optimize brain imaging techniques to see the brain's wiring in unprecedented detail. * The Grand Challenge on Pain supports research to understand the changes in the nervous system that cause acute, temporary pain to become chronic. The initiative is supporting multi-investigator projects to partner researchers in the pain field with researchers in the neuroplasticity field. * The Blueprint Neurotherapeutics Network is helping small labs develop new drugs for nervous system disorders. The Network provides research funding, plus access to millions of dollars worth of services and expertise to assist in every step of the drug development process, from laboratory studies to preparation for clinical trials. 
Project teams across the U.S. have received funding to pursue drugs for conditions from vision loss to neurodegenerative disease to depression. Since its inception in 2004, the Blueprint has supported the development of new resources, tools and opportunities for neuroscientists. For example, the Blueprint supports several training programs to help students pursue interdisciplinary areas of neuroscience, and to bring students from underrepresented groups into the neurosciences. The Blueprint also funds efforts to develop new approaches to teaching neuroscience through K-12 instruction, museum exhibits and web-based platforms. From fiscal years 2007 to 2009, the Blueprint focused on three major themes of neuroscience - neurodegeneration, neurodevelopment, and neuroplasticity. These efforts enabled unique funding opportunities and training programs, and helped establish new resources including the Blueprint Non-Human Primate Brain Atlas.
Proper citation: NIH Blueprint for Neuroscience Research (RRID:SCR_003670) Copy
http://www.loni.usc.edu/BIRN/Projects/Mouse/
Animal model data primarily focused on mice including high resolution MRI, light and electron microscopic data from normal and genetically modified mice. It also has atlases, and the Mouse BIRN Atlasing Toolkit (MBAT) which provides a 3D visual interface to spatially registered distributed brain data acquired across scales. The goal of the Mouse BIRN is to help scientists utilize model organism databases for analyzing experimental data. Mouse BIRN has ended. The next phase of this project is the Mouse Connectome Project (https://www.nitrc.org/projects/mcp/). The Mouse BIRN testbeds initially focused on mouse models of neurodegenerative diseases. Mouse BIRN testbed partners provide multi-modal, multi-scale reference image data of the mouse brain as well as genetic and genomic information linking genotype and brain phenotype. Researchers across six groups are pooling and analyzing multi-scale structural and functional data and integrating it with genomic and gene expression data acquired from the mouse brain. These correlated multi-scale analyses of data are providing a comprehensive basis upon which to interpret signals from the whole brain relative to the tissue and cellular alterations characteristic of the modeled disorder. BIRN's infrastructure is providing the collaborative tools to enable researchers with unique expertise and knowledge of the mouse an opportunity to work together on research relevant to pre-clinical mouse models of neurological disease. The Mouse BIRN also maintains a collaborative Web Wiki, which contains announcements, an FAQ, and much more.
Proper citation: Mouse Biomedical Informatics Research Network (RRID:SCR_003392) Copy
http://biosig.sourceforge.net/
Software library for processing biomedical signals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), respiration, and so on. BioSig contains tools for quality control, artifact processing, time series analysis, feature extraction, classification and machine learning, and tools for statistical analysis. Many tools (statistics, time series analysis, machine learning) can handle data with missing values. Another feature is that more than 40 different data formats are supported, and a number of converters for EEG, ECG and polysomnography data are provided. BioSig has been widely used for scientific research on EEG-based brain-computer interfaces (BCI), sleep research, and ECG and HRV analysis. It provides software interfaces for several programming languages (C, C++, Matlab/Octave, Python), and it also provides an interactive viewing and scoring tool for adding and editing annotations, markers and events.
Proper citation: BioSig: An Imaging Bioinformatics System for Phenotypic Analysis (RRID:SCR_008428) Copy
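The missing-value handling mentioned above typically means statistics that skip NaN-encoded samples. A pure-Python illustration of the idea (BioSig itself is implemented in C/C++ with Matlab/Octave and Python interfaces; the channel data here is a toy example):

```python
# NaN-tolerant mean, the kind of missing-value-aware statistic a
# biosignal toolbox provides: NaN samples are excluded rather than
# propagating through the computation.
import math

def nanmean(samples):
    vals = [s for s in samples if not math.isnan(s)]
    return sum(vals) / len(vals) if vals else math.nan

eeg_channel = [12.0, float("nan"), 14.0, 10.0, float("nan")]  # toy samples (uV)
print(nanmean(eeg_channel))  # 12.0 -- mean of the three valid samples
```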
An information extracting and processing package for biological literature that can be used online or installed locally via a downloadable software package (http://www.textpresso.org/downloads.html). Textpresso's two major elements are (1) access to full text, so that entire articles can be searched, and (2) the introduction of categories of biological concepts and classes that relate two objects (e.g., association, regulation) or describe one (e.g., methods). A search engine enables the user to search for one or a combination of these categories and/or keywords within an entire literature. The Textpresso project serves the biological and biomedical research community by providing: * Full-text literature searches of model organism research and subject-specific articles at individual sites. Major elements of these search engines are (1) access to full text, so that the entire content of articles can be searched, and (2) search capabilities using categories of biological concepts and classes that relate two objects (e.g., association, regulation) or identify one (e.g., cell, gene, allele). The search engines are flexible, enabling users to query the entire literature using keywords, one or more categories, or a combination of keywords and categories. * Text classification and mining of biomedical literature for database curation. These tools help database curators identify and extract biological entities and facts from the full text of research articles. Examples of entity identification and extraction include new allele and gene names and human disease gene orthologs; examples of fact identification and extraction include sentence retrieval for curating gene-gene regulation, Gene Ontology (GO) cellular component and GO molecular function annotations. In addition, papers are classified according to curation needs, employing a variety of methods such as hidden Markov models, support vector machines, conditional random fields and pattern matching.
Collaborators include WormBase, FlyBase, SGD, TAIR, dictyBase and the Neuroscience Information Framework, and collaborations with more model organism databases and projects are welcome. * Linking biological entities in PDF and online journal articles to online databases. A journal article mark-up pipeline links select content of Genetics journal articles to model organism databases such as WormBase and SGD. The entity markup pipeline links over nine classes of objects, including genes, proteins, alleles, phenotypes, and anatomical terms, to the appropriate page at each database. The first article published with online and PDF-embedded hyperlinks to WormBase appeared in the September 2009 issue of Genetics. As of January 2011, around 70 articles had been processed, and processing continues; extension of this pipeline to other journals and model organism databases is planned. Textpresso is useful as a search engine for researchers as well as a curation tool. It was developed as part of WormBase and is used extensively by C. elegans curators. Textpresso has currently been implemented for 24 different literatures, among them Neuroscience, and can readily be extended to other corpora of text.
Proper citation: Textpresso (RRID:SCR_008737) Copy
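The combined keyword-plus-category search described above can be sketched as filtering sentences on a literal keyword and on membership of any of their words in a category lexicon. The category contents and sentences below are invented for illustration, not Textpresso's actual lexica:

```python
# Textpresso-style retrieval sketch: a hit must contain the keyword AND
# at least one word belonging to the requested semantic category.
categories = {
    "regulation": {"activates", "represses", "regulates"},  # toy lexicon
}

sentences = [
    "daf-16 regulates lifespan in C. elegans",
    "daf-16 is expressed in neurons",
]

def search(keyword, category):
    lex = categories[category]
    return [s for s in sentences
            if keyword in s and any(w in lex for w in s.split())]

hits = search("daf-16", "regulation")
print(hits)  # only the sentence containing a regulation-class verb
```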
http://nrg.wustl.edu/projects/fiv
A tool for visualizing functional and anatomic MRI data.
Proper citation: FIV (RRID:SCR_009575) Copy
http://www.nitrc.org/projects/ccseg/
An open-source C++ application that allows automatic as well as user-interactive segmentation of the corpus callosum. Via a Qt-based graphical user interface, CCSeg also performs semi-automatic segmentation.
Proper citation: CCSeg - Corpus Callosum Segmentation (RRID:SCR_009453) Copy
http://www.ncigt.org/pages/Research_Projects/ImagingCoreToolbox/Imaging_Toolkit
This software provides algorithms for the reconstruction of raw MR data. In particular, it supports the reconstruction of accelerated acquisitions in which k-space is subsampled and the Fourier-domain encoding is complemented by temporal encoding, spatial encoding, and/or a constrained reconstruction. This library of functions provides a number of reconstruction algorithms that employ advanced MR imaging methods, including: UNFOLD; parallel imaging methods such as SENSE and GRAPPA; homodyne processing of partial-Fourier data; gradient field inhomogeneity correction (gradwarp); and EPI Nyquist ghost correction and ramp-sampling gridding. The target audience is research groups who may be interested in exploring or employing advanced MR reconstruction techniques but do not have the necessary expertise in-house. Inquiries may be directed to: ncigt-imaging-toolkit -at- bwh.harvard.edu
Proper citation: NCIGT Fast Imaging Library (RRID:SCR_009609) Copy
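The common thread of the methods above is that MR image space and k-space are related by the discrete Fourier transform: a fully sampled k-space line reconstructs by an inverse DFT, and methods like SENSE, GRAPPA and homodyne processing recover images when k-space is only partially sampled. A pure-Python toy of the fully sampled 1D case (not the library's code):

```python
# Forward DFT takes a 1D "image" to k-space; the inverse DFT recovers it.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

image = [0.0, 1.0, 2.0, 1.0]        # toy 1D object
kspace = dft(image)                  # "acquired" raw data
recon = [v.real for v in idft(kspace)]
print([round(v, 6) for v in recon])  # recovers the original object
```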