Searching across hundreds of databases

SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities. Visit SciCrunch to register your resource.

On page 5, showing results 81-100 of 24,974.
  • RRID:SCR_008519

    This resource has 1+ mentions.

http://www.flintbox.com/technology.asp?page=3716

Welcome to Flintbox, an application that revolutionizes the way the innovation community can share technologies, distribute new materials and software, and collaborate on research projects. Hundreds of research institutions are participating in the Flintbox open innovation network. Become a member to contribute and explore.

We are excited to present the Flintbox Application Programming Interface (API) to provide users with an easy way to upload data from any system to Flintbox. This freely available tool for synchronized information exchange will be a valuable resource to the open innovation community. The Flintbox API gives you the power to:
  • Import existing technology postings directly into Flintbox
  • Avoid duplicate data entry to market your technologies, ideas, and materials
  • Create postings from any database
  • Export project information to other websites
For more information about using and implementing the Flintbox API, please see the About the Flintbox API page.

Featured Project: The new Flintbox offers expanded transactional capabilities: credit cards, purchase orders, purchase authorizations, donations, and much more. The global Flintbox community is ideal for marketing new technologies, creative works, course materials, and innovative ideas. Flintbox enables universities to maximize their outreach to a broad spectrum of the innovation community. Plus, Wellspring provides expertise in assisting TTOs, inventors, and industrial liaison offices to promote their technologies, research programs, and partnering opportunities on Flintbox.

Flintbox for Corporations: Corporations turn to Flintbox to cultivate existing relationships and engage in new opportunities for collaborative research and solution sourcing from universities and other companies. For example, a pharmaceutical firm uses Flintbox to provide researchers and physicians software to assess patient outcomes.

Flintbox for Technology Communities: Flintbox empowers technology communities to connect effectively in a geographic region, building a Sphere of Innovation to recognize and capitalize on the valuable relationships and innovation assets in your community. For example, by creating or joining a community Group of members with common interests, say tissue engineering or artificial intelligence, you can link to other groups and explore common interests and complementary resources, enabling a dynamic collaborative effort and the efficient sharing of related technology projects. Flintbox is a registered trademark of Wellspring Worldwide, LLC.

Proper citation: Flintbox (RRID:SCR_008519)


  • RRID:SCR_008514

    This resource has 1+ mentions.

http://www.fltk.org

This document defines the processes and standards that all FLTK developers must follow when developing and documenting FLTK, and how trouble reports are handled and releases are generated. The purpose of defining formal processes and standards is to organize and focus our development efforts, ensure that all developers communicate and develop software with a common vocabulary/style, and make it possible for us to generate and release a high-quality GUI toolkit which can be used with a high degree of confidence. Much of this file describes the existing practices that have been used up through FLTK 1.1.x; however, I have also added some new processes/standards to use for future code and releases. The fltk-dev mailing list and fltk.development newsgroup are the primary means of communication between developers. All major design changes must be discussed prior to implementation.

Specific Goals: The specific goals of FLTK are as follows:
  • Develop a C++ GUI toolkit based upon sound object-oriented design principles and experience. (*)
  • Minimize CPU usage (fast). (*)
  • Minimize memory usage (light). (*)
  • Support multiple operating systems and windowing environments, including UNIX/Linux, MacOS X, Microsoft Windows, and X11, using the native graphics interfaces. (*)
  • Support OpenGL rendering in environments that provide it. (*)
  • Provide a graphical development environment for designing GUI interfaces, classes, and simple programs. (*)
  • Support UTF-8 text.
  • Support printer rendering in environments that provide it.
  • Support schemes, styles, themes, skinning, etc. to alter the appearance of widgets in the toolkit easily and efficiently. The purpose is to allow applications to tailor their appearance to the underlying OS or based upon personal/user preferences.
  • Support newer C++ language features, such as templating via the Standard Template Library (STL), and certain Standard C++ library interfaces, such as streams. However, FLTK will not depend upon such features and interfaces, to minimize portability issues.
  • Support intelligent layout of widgets.

Many of these goals are satisfied by FLTK 1.1.x (*), and many complex applications have been written using FLTK on a wide range of platforms and devices. Development of the remaining features is proceeding for FLTK 2.0 with a new, namespace-based API. While 2.0 offers some limited 1.x source compatibility, the changes to the underlying widget classes are significant enough to prevent full compatibility.

Software Development Practices: All widgets are documented using the Doxygen software; Doxygen comments are placed in the header file for the class comments and any inline methods, while non-inline methods should have their comments placed in the corresponding source file. The purpose of this separation is to place the comments near the implementation to reduce the possibility of the documentation getting out of sync with the code. All widgets must have a corresponding test program which exercises all widget functionality and can be used to generate image(s) for the documentation. Complex widgets must have a written tutorial, either as full text or an outline for later publication. The final manuals are formatted using the HTMLDOC software. Sponsor: Easy Software Products.

Proper citation: Fast Light Toolkit (RRID:SCR_008514)


  • RRID:SCR_008517

    This resource has 1+ mentions.

http://fluxus-technology.com/sharenet.htm

DNA software and consultancy: The DNA Alignment software and Network software are used by biologists, anthropologists, medical researchers, and students worldwide. We carry out phylogeographic consultancy for US, UK, and German clients, including legal medical work. We were involved in the TV projects The Real Eve (Discovery Channel) and Motherland (BBC). Our biotechnological director, Dr Peter Forster, has been on the editorial board of the International Journal of Legal Medicine since 1999.

Technology and sales consultancy: Clients include multinational corporations, research institutions, and medium to small businesses. Client quotes: "Vorbildlicher Einsatz" ("exemplary deployment") (Dr Stephan Hitzel, EADS, in CADplus 1/2003 journal, cover story). "We have had an effective business relationship with Fluxus Technology since 1999, and their experience of the German market has proved to be invaluable as part of our operations supplying high-end engineering software and consultancy services right across the engineering supply chain" (Andy Chinn, Business Development Manager, ITI TranscenData, February 2006).

Abstract: Indo-European is the largest and best-documented language family in the world, yet the reconstruction of the Indo-European tree, first proposed in 1863, has remained controversial. Complications may include ascertainment bias when choosing the linguistic data, and disregard for the wave model of 1872 when attempting to reconstruct the tree. Essentially analogous problems were solved in evolutionary genetics by DNA sequencing and phylogenetic network methods, respectively. We now adapt these tools to linguistics, and analyze Indo-European language data, focusing on Celtic and in particular on the ancient Celtic language of Gaul (modern France), by using bilingual Gaulish-Latin inscriptions. Our phylogenetic network reveals an early split of Celtic within Indo-European. Interestingly, the next branching event separates Gaulish (Continental Celtic) from the British (Insular Celtic) languages, with Insular Celtic subsequently splitting into Brythonic (Welsh, Breton) and Goidelic (Irish and Scottish Gaelic). Taken together, the network thus suggests that the Celtic language arrived in the British Isles as a single wave (and then differentiated locally), rather than in the traditional two-wave scenario (P-Celtic to Britain and Q-Celtic to Ireland). The phylogenetic network furthermore permits the estimation of time in analogy to genetics, and we obtain tentative dates for Indo-European at 8100 BC ± 1,900 years, and for the arrival of Celtic in Britain at 3200 BC ± 1,500 years. The phylogenetic method is easily executed by hand and promises to be an informative approach for many problems in historical linguistics.

Proper citation: Fluxus (RRID:SCR_008517)


  • RRID:SCR_008474

    This resource has 10+ mentions.

http://www.daylight.com/dayhtml/doc/theory/

Daylight provides enterprise-level cheminformatics software technologies to life science companies. Our superior chemistry, high performance, and open architecture have earned Daylight a reputation for delivering the state of the art in chemical information processing since 1987. Daylight Chemical Information Systems, Inc. is a privately held company with corporate offices in Aliso Viejo, CA and research offices in Santa Fe, NM and Cambridge, England.

Support: At Daylight, support means a wide array of services that are designed to empower users to make the most of Daylight software. We offer detailed administration documentation and guides through this website. Our User Group Meetings allow in-depth exploration of our technology. And, of course, our support staff is available to assist you whenever the need occurs.
  • Download - current releases as well as contributed code, system requirements, installation directions, and release information.
  • Reference Guides - list of available documentation, such as programming guides and user manuals.
  • Cheminformatics - additional general resources, including introductory materials, the theory manual, tutorials, and user meeting archives.
Sponsor: Daylight

Proper citation: Daylight (RRID:SCR_008474)


  • RRID:SCR_008470

    This resource has 10+ mentions.

http://helixweb.nih.gov/dnaworks

DNAWorks automates the design of oligonucleotides for gene synthesis by PCR-based methods. The availability of sequences of entire genomes has dramatically increased the number of protein targets, many of which will need to be overexpressed in cells other than the original source of DNA. Gene synthesis often provides a fast and economically efficient approach, since the synthetic gene can be optimized for expression and constructed for easy mutational manipulation without regard to the parent genome. The DNAWorks website provides forms for the simple input information required, i.e., the amino acid sequence of the target protein and the melting temperature (needed for the gene assembly) of the synthetic oligonucleotides. The program outputs a series of oligonucleotide sequences with codons optimized for expression in an organism of choice. These oligonucleotides are characterized by highly homogeneous melting temperatures and a minimized tendency for hairpin formation. The approach simplifies the production of proteins from a wide variety of organisms for genomics-based studies.
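The melting-temperature input mentioned above can be estimated directly from base composition. As a minimal illustration (this is not DNAWorks code; the function name is made up, and the Wallace rule used here is only a rough approximation suitable for short oligos):

```python
def wallace_tm(oligo: str) -> int:
    """Rough melting temperature (deg C) for a short oligonucleotide
    using the Wallace rule: 2 degrees per A/T base, 4 degrees per G/C base."""
    seq = oligo.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# Example: a 20-mer with 50% GC content
print(wallace_tm("ATGCATGCATGCATGCATGC"))  # → 60
```

Real design tools use nearest-neighbor thermodynamic models for the homogeneous-Tm optimization described above; the Wallace rule is shown only because it fits in one line.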

Proper citation: DNAWorks at Helix Systems (RRID:SCR_008470)


  • RRID:SCR_008548

    This resource has 1+ mentions.

http://seqpig.sourceforge.net/

A software library for Apache Pig for the distributed analysis of large sequencing datasets on Hadoop clusters.

Proper citation: SeqPig (RRID:SCR_008548)


  • RRID:SCR_008549

    This resource has 50+ mentions.

http://rmaexpress.bmbolstad.com

RMAExpress is a standalone GUI program for Windows (and Linux) to compute gene expression summary values for Affymetrix GeneChip data using the Robust Multichip Average expression summary and to carry out quality assessment using probe-level metrics. It does not require R, nor is it dependent on any component of the BioConductor project. It focuses on processing 3' IVT expression arrays, and exon and WT gene arrays.

What is RMA? RMA is the Robust Multichip Average. It consists of three steps: a background adjustment, quantile normalization (see the Bolstad et al. reference), and finally summarization. Some published references for the RMA methodology are:
  • Bolstad, B.M., Irizarry, R.A., Astrand, M., and Speed, T.P. (2003), A Comparison of Normalization Methods for High Density Oligonucleotide Array Data Based on Bias and Variance. Bioinformatics 19(2):185-193.
  • Irizarry, R.A., Bolstad, B.M., Collin, F., Cope, L.M., Hobbs, B., and Speed, T.P. (2003), Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Research 31(4):e15.
  • Irizarry, R.A., Hobbs, B., Collin, F., Beazer-Barclay, Y.D., Antonellis, K.J., Scherf, U., and Speed, T.P. (2002), Exploration, Normalization, and Summaries of High Density Oligonucleotide Array Probe Level Data. Accepted for publication in Biostatistics.

What do I need? You will need the appropriate CDF and CEL files for your dataset. For Exon and WT Gene arrays, the PGF and CLF files should be used instead of the CDF file to build a CDFRME file. The process for doing this is explained in the user manual. Some pre-built CDFRME files are also available: HuEx_CDFRME.zip (95.9MB), HuGene_CDFRME.zip (5.5MB), MoEx_CDFRME.zip (79.6MB), MoGene_CDFRME.zip (6.3MB), RaEx_CDFRME.zip (48.4MB), RaGene_CDFRME.zip (5.7MB).

Can I use affy/BioConductor instead? Of course. In principle you will get the same results from both, provided you have consistent settings in affy/BioConductor and RMAExpress. Some people prefer the power and flexibility of R, and others like the point-and-click simplicity of a GUI; RMAExpress caters to the latter. Since RMAExpress outputs the computed expression values to a text file, you may of course load the expression measures into R and use features of Bioconductor for the analysis of your gene expression values. You can also open the results file in any other application that supports importing plain text files.

Will I get the same results as I would using affy/Bioconductor? Yes. The results from RMAExpress should be consistent.

What are the machine requirements? A good rule of thumb is the more RAM you have the better. I would recommend at least 1GB, though 512MB will work in most situations. At this point the program has been tested using Windows 2000, Windows XP, Windows Vista, and Linux. Most recently I have had a report of over 10,000 arrays processed in a single session.

Can I do any quality assessment? Yes: store the residuals when you compute the expression values. Then you may examine chip pseudo-images of the residuals. Note that high positive residuals are colored increasingly red and low negative residuals are colored increasingly blue. To better interpret these images and gain a feel for what is typical, you may visit the PLM Image Gallery, where images for a number of different datasets are shown. Access to the NUSE and RLE quality assessment metrics is also provided.

How do I download and install it? Click here for the current release Windows version and use the installer to install the program. The current release version number is 1.0 (released June 29, 2008). A pre-built Linux version is not currently available, but you may build it using the source code. Pre-release versions are also available for download; the release versions will be more stable, while the development versions may have features that are incomplete or that will be removed or altered before the next release. This work was supported by the PGA U01 HL66583.
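The quantile-normalization step of RMA described above can be sketched in a few lines. This is an independent illustration of the Bolstad et al. idea (replace each value by the mean, across chips, of the values sharing its rank), not RMAExpress source code, and the function name is invented:

```python
def quantile_normalize(columns):
    """Quantile-normalize a list of equal-length lists (one per chip):
    sort each column, average across columns at each rank, then map
    every original value to the mean for its within-column rank."""
    n = len(columns[0])
    sorted_cols = [sorted(c) for c in columns]
    # Mean of the k-th smallest value across all chips
    rank_means = [sum(col[i] for col in sorted_cols) / len(columns)
                  for i in range(n)]
    out = []
    for c in columns:
        order = sorted(range(n), key=lambda i: c[i])  # indices by value
        normalized = [0.0] * n
        for rank, idx in enumerate(order):
            normalized[idx] = rank_means[rank]
        out.append(normalized)
    return out

a = [5.0, 2.0, 3.0]
b = [4.0, 1.0, 6.0]
print(quantile_normalize([a, b]))  # → [[5.5, 1.5, 3.5], [3.5, 1.5, 5.5]]
```

After normalization every chip has an identical distribution of values, which is what makes the later median-polish summarization comparable across arrays.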

Proper citation: RMA Express (RRID:SCR_008549)


  • RRID:SCR_008540

http://statgen.ncsu.edu/qtlcart/

QTL Cartographer is a suite of programs to map quantitative traits using a map of molecular markers. The programs are available via an anonymous ftp server; see the README for more information. You will also want a copy of Gnuplot to display plots made by QTL Cartographer. Gnuplot is freely available on the web; do a search to find the latest version for your operating system.

Windows QTL Cartographer: Windows QTL Cartographer is a user-friendly version of QTL Cartographer. It has a GUI interface and runs under Microsoft Windows.

Manual: The manual for QTL Cartographer is written in LaTeX2e. An Adobe Portable Document Format (PDF) version is available with the distribution of the programs; look in the doc/pdf folder for the manual.pdf file. This file can be printed or viewed using Acrobat Reader, available through the Adobe website. The manual has also been translated into HTML, available through the following link. Please note that the translator is not perfect: the PDF form of the manual is much more accurate. Specifically, latex2html failed to translate figure 2.4 and simply printed 2.3 twice.

Man Pages: In the UNIX world, it is common to have man pages for programs. We have written such a set of man pages, and these are available with the UNIX distribution. The man pages are also a part of the manual.pdf file. The man pages cover: Emap, Rmap, Rqtl, Rcross, Qstats, LRmapqtl, SRmapqtl, Zmapqtl, JZmapqtl, MImapqtl, MultiRegress, Prune, Preplot, Eqtl, and QTLcart.

Perl scripts: QTL Cartographer comes with some Perl scripts to automate repetitive tasks and reformat output files. They are available in the doc/scripts subdirectory of the distribution. The man pages explain what the scripts can do:
  1. Bootstrap.pl is a script for running a bootstrap analysis.
  2. CWTupdate.pl is used with Permute.pl for the comparison-wise thresholds.
  3. EWThreshold.pl is used with Permute.pl for the experiment-wise thresholds.
  4. GetMaxLR.pl is used with Permute.pl for the experiment-wise thresholds.
  5. Model8.pl iterates Zmapqtl to find a stable set of cofactors for composite interval mapping.
  6. Permute.pl is a script for running a permutation test.
  7. Prepraw.pl allows you to reformat and check a Mapmaker data file.
  8. SRcompare.pl will compare the set of cofactors in two SRmapqtl output files.
  9. SSupdate.pl is used with Bootstrap.pl to update the sum and sum of squares for the likelihoods and parameter estimates.
  10. Vert.pl converts text file line endings between Unix, Macintosh, and Windows.
  11. Ztrim.pl redisplays Zmapqtl output so that it fits in a terminal window.

Data: We are now posting published data sets to our web site, as text files in ftp subdirectories; please read the Readme file in each directory for information on the data. Zeng et al. provide data for their paper "Genetic architecture of a morphological shape difference between two Drosophila species." If you have any data that you would like to make available via our server, contact Chris Basten.

Presentations: From time to time, Chris Basten gives presentations on how to use QTL Cartographer. These presentations are created in Microsoft PowerPoint. The source file for the presentation is available with the distribution of the programs; look in the doc subdirectory.

Binary Traits: See this page for more information on the BTmapqtl module, an add-on written in Lauren McIntyre's lab. BTmapqtl is in the binary directory of the distribution (and is created with a make for the UNIX version).

Proper citation: Bioinformatics Research Center (RRID:SCR_008540)


  • RRID:SCR_008421

    This resource has 10+ mentions.

http://mothra.ornl.gov/cgi-bin/cat/cat.cgi

A repository of tools for analysis and annotation of CAZYmes (Carbohydrate Active enZYmes).

Proper citation: CAT (RRID:SCR_008421)


  • RRID:SCR_008539

    This resource has 10000+ mentions.

http://www.qiagen.com

A commercial organization which provides assay technologies to isolate DNA, RNA, and proteins from any biological sample. Assay technologies are then used to make specific target biomolecules, such as the DNA of a specific virus, visible for subsequent analysis.

Proper citation: QIAGEN (RRID:SCR_008539)


  • RRID:SCR_008538

    This resource has 1+ mentions.

http://genomine.org/qvalue/

Features:
  • This software takes a list of p-values resulting from the simultaneous testing of many hypotheses and estimates their q-values. A point-and-click interface is now available!
  • The q-value of a test measures the proportion of false positives incurred (called the false discovery rate) when that particular test is called significant.
  • A short tutorial on q-values and false discovery rates is provided with the manual.
  • Various plots are automatically generated, allowing one to make sensible significance cut-offs.
  • Several mathematical results have recently been shown on the conservative accuracy of the estimated q-values from this software.
  • The software can be applied to problems in genomics, brain imaging, astrophysics, and data mining.
This research was supported in part by a National Science Foundation graduate research fellowship.
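The core of the q-value computation can be sketched as follows. This sketch fixes the estimate of pi0 (the proportion of true null hypotheses) at 1, the conservative case in which q-values coincide with Benjamini-Hochberg adjusted p-values; the actual software estimates pi0 from the p-value distribution, and the function name here is invented:

```python
def qvalues(pvals):
    """q-values with pi0 fixed at 1 (conservative); this reduces to
    Benjamini-Hochberg adjusted p-values: p * m / rank, made monotone."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, p ascending
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        prev = min(prev, pvals[i] * m / (rank + 1))
        q[i] = prev
    return q

print(qvalues([0.01, 0.04, 0.03, 0.20]))
```

Calling a test significant whenever its q-value is below a threshold t then controls the false discovery rate at level t, which is the usage the feature list above describes.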

Proper citation: Q-Value Software (RRID:SCR_008538)


  • RRID:SCR_008411

    This resource has 1+ mentions.

http://jakarta.apache.org/tomcat

Apache Tomcat is an open source software implementation of the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed under the Java Community Process. Apache Tomcat is developed in an open and participatory environment and released under the Apache License version 2. Apache Tomcat is intended to be a collaboration of the best-of-breed developers from around the world. We invite you to participate in this open development project. Apache Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. Some of these users and their stories are listed on the PoweredBy wiki page.

Proper citation: Apache Tomcat (RRID:SCR_008411)


  • RRID:SCR_008530

    This resource has 50+ mentions.

https://apps.childrenshospital.org/clinical/research/ingber/GEDI/samples.htm

A program that opens a new perspective on the analysis of microarray data (e.g., gene expression profiling). Unlike traditional gene clustering software, GEDI is primarily sample-oriented rather than gene-oriented. By treating each high-dimensional sample, such as one microarray experiment, as an object, it accentuates the genome-wide response of a tissue or a patient and treats it as an integrated biological entity. Hence, GEDI honors the new spirit of a system-level approach in biology. Yet it also allows the researcher to quickly zoom in from global patterns onto individual genes that exhibit interesting expression behavior and retrieve gene-specific information. Therefore, GEDI unites a novel holistic perspective with the traditional gene-centered approach in molecular biology. GEDI allows experimental biologists or clinicians with no bioinformatics background to efficiently and intuitively navigate through a large number of expression profiles, each with a memorizable face, and inspect, group, and collect them, like managing a stack of baseball cards.

DYNAMIC ANALYSIS: The unique strength of GEDI, for which it was originally developed, is that it can display the results of parallel monitoring of multiple high-dimensional time courses, such as the comparison of expression-profile time evolution in response to a series of drugs. GEDI creates animated graphics showing how tens of thousands of genes change their expression over time in response to hundreds of separately tested drugs.

STATIC ANALYSIS: The signature graphical output of GEDI, the GEDI mosaics, provides a unique, at-a-glance visual engram that gives each microarray or other high-dimensional dataset a face. A characteristic of GEDI's analysis is that it does not presuppose any particular structure in the data (such as clusters or hierarchical organization). Thus, it allows the researcher to use human pattern recognition to perform a global first-level analysis of the data.

Sponsor: The project was supported by the Air Force Office of Scientific Research and the National Institutes of Health. It is distributed for free academic use by Children's Hospital, Boston.

Proper citation: GEDI (RRID:SCR_008530)


  • RRID:SCR_008566

http://www.widetag.com/

Headquartered in Redwood City California, WideTag is a pioneer in architecting computing systems that integrate sensors, positioning devices and memory with social, Web 2.0-style services in applications that revolutionize business and push consumer technology.

Proper citation: Wide Tag (RRID:SCR_008566)


  • RRID:SCR_008444

    This resource has 100+ mentions.

http://www.biokin.com/dynafit/

Program DynaFit: Analysis of (bio)chemical kinetics and equilibria. Welcome to the DynaFit home page.

Purpose: The main purpose of the program DynaFit is to perform nonlinear least-squares regression of chemical kinetic, enzyme kinetic, or ligand-receptor binding data. The experimental data can be either initial reaction velocities in dependence on the concentration of varied species (e.g., inhibitor concentration vs. velocity), or reaction progress curves (e.g., time vs. absorbance).

Symbolic Notation: The main advantage in using the program DynaFit is the ability to characterize the (bio)chemical reacting system in terms of symbolic, or stoichiometric, equations. For example, the "slow, tight" inhibition of a dissociative dimeric enzyme is described by the following text:

    Monomer + Monomer <==> Enzyme : k1 k2
    Enzyme + Inhibitor <==> Complex : k3 k4
    Enzyme + Substrate <==> ReactiveX : k5 k6
    ReactiveX --> Product + Enzyme : k7 k8

The names of chemical species (Monomer, Enzyme, etc.) are entirely arbitrary and can be freely chosen by the investigator.

Bibliographic Reference: If you publish any results obtained by using DynaFit, please cite the following reference: Kuzmic, P. (1996) Anal. Biochem. 237, 260-273. Program DYNAFIT for the Analysis of Enzyme Kinetic Data: Application to HIV Proteinase.

ABSTRACT: A computer program with the code name DYNAFIT was developed for fitting either the initial velocities or the time-course of enzyme reactions to an arbitrary molecular mechanism represented symbolically by a set of chemical equations. Seven numerical tests and five graphical tests are applied to judge the goodness of fit. Experimental data on the inhibition of the dissociative dimeric proteinase from HIV were used in four test examples. A set of initial velocities was analyzed to see if a tight-binding inhibitor could bind to the HIV proteinase monomer. Three different sets of progress curves were analyzed (i) to determine the kinetic properties of an irreversible inhibitor; (ii) to investigate the dissociation and denaturation mechanism of the protease dimer; and (iii) to investigate the inhibition mechanism of a transient inhibitor. See a MEDLINE abstract with related references concerning the kinetics of HIV-1 protease.

Numerical Methods: The nonlinear regression module uses the Levenberg-Marquardt algorithm [1]. The time-course of (bio)chemical reactions is computed by the numerical integration of simultaneous first-order ordinary differential equations, using the Livermore Solver for Ordinary Differential Equations (LSODE, [2]). The composition of complex mixtures at equilibrium (e.g., in a concentration-jump experiment where a complex mixture is incubated prior to the addition of a reagent) is computed by solving simultaneous nonlinear algebraic equations, namely the mass-balance equations for the component species, using the multidimensional Newton-Raphson method [3].

References:
  1. G. A. F. Seber and C. J. Wild (1989) Nonlinear Regression, Wiley, New York, p. 624.
  2. A. C. Hindmarsh (1983) ODEPACK: a systematized collection of ODE solvers; in Scientific Computing, ed. R. S. Stepleman et al., North Holland, Amsterdam, pp. 55-64.
  3. E. Kreyszig (1993) Advanced Engineering Mathematics, 7th ed., John Wiley, New York, p. 929.

Minimum System Requirements (DynaFit for Windows): Intel Pentium III or Celeron class 800 MHz or faster processor; Microsoft Windows XP (SP1) or 2000 (SP2); 128 MB RAM; 20 MB hard disk space; Ethernet Network Interface Card required for license activation (1); CD/DVD-ROM drive required for software installation (2). (1) The Network Interface Card is used to compute a unique Computer ID, tied to a particular DynaFit license. Essentially, the Computer ID required for license activation is an encrypted Media Access Control (MAC) address associated with the given Network Card. (2) A CD/DVD-ROM drive is not required if the software is being installed using the downloadable installer file dynafit-install.zip.

Sponsor: This work has been supported by the NIH, grant No. R43 AI52587-02, and the U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Ft. Detrick, MD, administered by the Pacific Telehealth & Technology Hui, Honolulu, HI, contract No. V549P-6073.
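The numerical machinery described above (turning symbolic mass-action steps into ODEs and integrating them) can be illustrated for the simplest one-substrate mechanism, E + S <==> ES --> E + P. DynaFit itself uses LSODE and Levenberg-Marquardt fitting; this fixed-step Euler loop with made-up rate constants and an invented function name is only a conceptual sketch:

```python
def simulate_mm(e0, s0, k1, k2, kcat, t_end, dt=1e-3):
    """Integrate E + S <==> ES --> E + P by forward Euler under
    mass-action kinetics. Returns final concentrations (E, S, ES, P)."""
    e, s, es, p = e0, s0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        v_bind = k1 * e * s - k2 * es   # net association flux
        v_cat = kcat * es               # catalytic flux
        e += dt * (-v_bind + v_cat)
        s += dt * (-v_bind)
        es += dt * (v_bind - v_cat)
        p += dt * v_cat
        t += dt
    return e, s, es, p

e, s, es, p = simulate_mm(e0=1.0, s0=10.0, k1=1.0, k2=0.5, kcat=2.0, t_end=5.0)
# Mass balance: total enzyme (E + ES) and total substrate (S + ES + P)
# should be conserved by the mechanism
print(round(e + es, 6), round(s + es + p, 6))
```

A fitting program like DynaFit wraps such an integration inside a least-squares loop, adjusting k1, k2, kcat until the simulated progress curve matches the measured one.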

Proper citation: Program DynaFit (RRID:SCR_008444)


  • RRID:SCR_008564

http://www.phru.nhs.uk/

THIS RESOURCE IS NO LONGER IN SERVICE, documented August 23, 2016. Tools were developed by the Critical Appraisal Skills Programme (CASP) to help with the process of critically appraising articles of the following types of research. These are available and free to download for personal use.

Proper citation: Public Health Resources Unit (RRID:SCR_008564)


  • RRID:SCR_008558

    This resource has 50+ mentions.

http://lagan.stanford.edu

About the LAGAN Toolkit: The LAGAN Toolkit consists of four components:
  • CHAOS: a pairwise local aligner optimized for non-coding and other poorly conserved regions of the genome. It uses both exact-matching and degenerate seeds, and is able to find homology in the presence of gaps.
  • LAGAN: our highly parametrizable pairwise global alignment program. It takes local alignments generated by CHAOS as anchors, and limits the search area of the Needleman-Wunsch algorithm around these anchors.
  • Multi-LAGAN: a generalization of the pairwise algorithm to multiple sequence alignment. M-LAGAN performs progressive pairwise alignments, guided by a user-specified phylogenetic tree. Alignments are aligned to other alignments using the sum-of-pairs metric.
  • Shuffle-LAGAN: a novel "glocal" alignment algorithm that is able to find rearrangements (inversions, transpositions, and some duplications) in a global alignment framework. It uses CHAOS local alignments to build a map of the rearrangements between the sequences, and LAGAN to align the regions of conserved synteny.
The website uses scripts written by Alex Poliakov and was designed by Marina Sirota.
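The Needleman-Wunsch search that LAGAN restricts around its anchors follows the classic dynamic-programming recurrence. A minimal unanchored sketch (the match/mismatch/gap scores here are arbitrary illustrations, not LAGAN's actual scoring scheme):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score via the Needleman-Wunsch DP recurrence."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # align prefix of a against gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # align prefix of b against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitute
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[n][m]

print(needleman_wunsch("GATTACA", "GATCA"))  # → 1
```

Anchoring, as LAGAN does, prunes this quadratic table to a band around the CHAOS hits, which is what makes genome-scale global alignment tractable.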

Proper citation: LAGAN (RRID:SCR_008558)


  • RRID:SCR_008554

    This resource has 100+ mentions.

http://safcsupplysolutions.com

THIS RESOURCE IS NO LONGER IN SERVICE, documented August 23, 2016. A business division of Sigma-Aldrich Corporation, focusing on providing custom manufactured products and specialized services used in the industrial development and manufacturing processes that bring new drugs and new electronic products to market.

Proper citation: SAFC (RRID:SCR_008554)


http://rocr.bioinf.mpi-sb.mpg.de/

ROCR is a package for evaluating and visualizing the performance of scoring classifiers in the statistical language R. It features over 25 performance measures that can be freely combined to create two-dimensional performance curves. Standard methods for investigating trade-offs between specific performance measures are available within a uniform framework, including receiver operating characteristic (ROC) graphs, precision/recall plots, lift charts, and cost curves. ROCR integrates tightly with R's powerful graphics capabilities, allowing for highly adjustable plots. Equipped with only three commands and reasonable default values for optional parameters, ROCR combines flexibility with ease of use.

Performance measures that ROCR knows: accuracy, error rate, true positive rate, false positive rate, true negative rate, false negative rate, sensitivity, specificity, recall, positive predictive value, negative predictive value, precision, fallout, miss, phi correlation coefficient, Matthews correlation coefficient, chi-square statistic, mutual information, odds ratio, lift value, precision/recall F measure, ROC convex hull, area under the ROC curve, precision/recall break-even point, calibration error, mean cross-entropy, root mean squared error, SAR measure, expected cost, and explicit cost.

ROCR features: ROC curves, precision/recall plots, lift charts, cost curves, custom curves (freely selecting one performance measure for the x axis and one for the y axis), handling of data from cross-validation or bootstrapping, curve averaging (vertically, horizontally, or by threshold), standard error bars, box plots, curves color-coded by cutoff, printing of threshold values on the curve, tight integration with R's plotting facilities (making it easy to adjust plots or to combine multiple plots), full customizability, and ease of use (only 3 commands).

ROCR can be used under the terms of the GNU General Public License. Running within R, it is platform-independent.
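ROCR itself runs in R, so as a language-neutral sketch of the computation behind one of the curves it draws (the ROC graph), here is a minimal Python version. The function name and signature are illustrative, not ROCR's API: each threshold sweep step treats the top-scored prefix as "predicted positive" and records one (false positive rate, true positive rate) point.

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping a decision threshold down
    through the scores. Assumes both classes are present in labels."""
    # Sort instances by decreasing classifier score; each prefix of this
    # order is the "predicted positive" set for one threshold.
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]  # threshold above every score: nothing predicted positive
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```

ROCR's uniform framework generalizes exactly this idea: any of its performance measures can be placed on either axis, with the threshold as the parameter along the curve.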

Proper citation: Classifier Visualization in R (RRID:SCR_008551)


http://cython.org/

Cython is a language that makes writing C extensions for Python as easy as Python itself. Cython is based on the well-known Pyrex, but supports more cutting-edge functionality and optimizations. The Cython language is very close to Python, but additionally supports calling C functions and declaring C types on variables and class attributes. This allows the compiler to generate very efficient C code from Cython code, which makes Cython the ideal language for wrapping external C libraries and for writing fast C modules that speed up the execution of Python code. Sponsors: Google and Enthought funded Dag Seljebotn to greatly improve Cython's integration with NumPy; Kurt Smith and Danilo Freitas were funded through the Google Summer of Code program to work on improved Fortran and C support, respectively.
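As an illustrative sketch (not taken verbatim from the Cython documentation): in Cython's "pure Python" mode, ordinary type annotations can serve as the C type declarations the description mentions, so the cython compiler can emit typed C code for the hot loop, while the very same file still runs unchanged (just more slowly) under plain CPython. The function below is a hypothetical example of that style.

```python
def integrate(a: float, b: float, n: int) -> float:
    """Approximate the integral of x**2 over [a, b] with the midpoint
    rule using n rectangles. The annotations double as Cython C type
    declarations when the file is compiled; under CPython they are
    ordinary (ignored-at-runtime) annotations."""
    dx: float = (b - a) / n
    total: float = 0.0
    for i in range(n):
        x: float = a + (i + 0.5) * dx  # midpoint of rectangle i
        total += x * x * dx
    return total
```

Compiled with cython, a typed loop like this avoids Python object overhead entirely; that is the source of the "very efficient C code" claim above.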

Proper citation: Cython C-Extensions for Python (RRID:SCR_008466)



Can't find your Tool?

We recommend that you first click next to the search bar to check some helpful search tips and refine your search. Alternatively, you can register your tool with the SciCrunch Registry by adding a little information to a web form; logging in will let you create a provisional RRID, but it is not required to submit.

Can't find the RRID you're searching for?
  1. RRID Portal Resources

    Welcome to the RRID Resources search. From here you can search through a compilation of resources used by RRID and see how data is organized within our community.

  2. Navigation

You are currently on the Community Resources tab, looking through categories and sources that RRID has compiled. You can navigate through those categories from here, or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on RRID then you can log in from here to get additional features in RRID such as Collections, Saved Searches, and managing Resources.

  4. Searching

Here is the search term being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually add AND and OR between terms to change how we combine words in the search
    3. You can add "-" to a term to exclude results containing that term (ex. Cerebellum -CA1)
    4. You can add "+" to a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search, and can help refine your search
  5. Save Your Search

You can save any searches you perform here, for quick access later.

  6. Query Expansion

We recognized your search term and included synonyms and inferred terms alongside your term to help you find the data you are looking for.

  7. Collections

    If you are logged into RRID you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Sources

    Here are the sources that were queried against in your search that you can investigate further.

  9. Categories

Here are the categories present within RRID that you can filter your data on.

  10. Subcategories

Here are the subcategories present within this category that you can filter your data on.

  11. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
