SEND Implementation Wiki - Getting SEND-ready
This page is a high-level primer on getting your organization ready for SEND: where to start, what to consider, and how to plan when beginning your SEND implementation.
Training
Anyone new to SEND should read through at least the SEND Fundamentals page, which covers many of the key basics. The other fundamentals pages (linked at top) are also useful for understanding the concepts behind controlled terminology and the define file. In addition, training is available from a number of sources:
Vendor Evaluations
Vendor Solutions
The following vendors offer SEND solutions:
Deciding on a Vendor
Vendors offer a number of features in their solutions to make it easier to handle the many aspects of data management, absorption, and production, and it is a good idea to evaluate multiple tools to see what is available, what best fits into your systems, and what new functionality they provide (such as review/analysis). When on the hunt for a vendor, the following are useful to consider:
What functionality you need: Consider which of these are important to you and ensure that your needs are met when evaluating products. Note: if you only need a way to open the files for viewing (for example, a Sponsor who contracts all SEND production out), you may consider the Universal Xpt File Viewer, a free but extremely feature-limited tool that can open XPT files. This tool was previously known as the SAS Viewer, so if you already have the SAS Viewer, you can use that to open the files as well. The "How do I open XPT files?" question on the FAQ page lists other methods too, including using base SAS and R; a minimal scripted example follows this list.
What systems you have
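For instance, a dataset in XPT transport format can be opened with a few lines of script. The sketch below assumes a Python environment with pandas installed (the FAQ also covers base SAS and R); the file name is hypothetical.

```python
# A minimal sketch of opening an XPT transport file for viewing, assuming a
# Python environment with pandas installed (the FAQ also describes base SAS and R).
import pandas as pd

# "bw.xpt" is a hypothetical file name; substitute the path to your own dataset.
bw = pd.read_sas("bw.xpt", format="xport", encoding="utf-8")

print(bw.head())             # first few records of the domain
print(bw.columns.tolist())   # variable names, e.g. STUDYID, USUBJID, ...
```

This is enough for a quick look at a domain's records and variable names, but it provides none of the review or analysis features a dedicated viewer or vendor tool would.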
Sample Datasets
Accessing sample datasets is useful as a reference while learning the CDISC standard, for exercising tools or processes that use SEND datasets, or for verifying your own study datasets. With final guidance from the FDA on the horizon, seeing a complete SEND dataset will help you prepare. See the "Are there publicly available sample SEND datasets?" question on the FAQ for a list of places where you can obtain sample datasets.
Working with CROs
Initiating the Relationship
When working with CROs, there are several details that should be understood and agreed to in advance, from logistical considerations (such as additions to the master agreement) to specific content capability questions (such as the method of creating the Exposure domain). Many of these considerations are best settled early in the process. The SEND between Organizations page has questionnaires designed to help in setting up and maintaining such a partnership.
Sponsor Responsibilities
Even with a full-service CRO, the Sponsor is ultimately responsible for the submission of SEND datasets. As such, there are key responsibilities regarding the specification and handling of the SEND datasets that cannot be performed by the CRO or another external body. At a high level, they are:
Internal Implementation Considerations
Forming Your Implementation Team
Your implementation team will need data-savvy individuals from various parts of the organization. The following are suggested roles that can help make an implementation a success. A single individual may fill multiple roles (especially in smaller organizations), and these roles may also be filled by outside help.
Mapping Exercises
Domain Mapping
The following are recommended high-level steps for mapping to SEND. Regardless of whether you are planning on a vendor or a homegrown solution, a mapping assessment is most likely necessary to get a handle on which systems are involved, which changes might need to be made, how terminology will map, and so on.
Once these steps are done, you should have a high-level understanding of:
Some example deliverables that can assist and shape the process include:
Controlled Terminology (CT) Mapping
Another critical piece of the mapping exercise is understanding and working through the mapping of your internal lexicons or terms to Controlled Terminology. See the CT Fundamentals page for more information on what CT is and some general considerations.
This information will then be a vital feed into either the vendor product or your own internal design.
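As a rough illustration of what such a mapping can look like once captured, the sketch below applies a small internal-term-to-CT lookup to lab data. It assumes Python with pandas, and the internal names and LBTESTCD/LBTEST codes are examples only; verify any target terms against the current published SEND Controlled Terminology.

```python
# A minimal sketch of applying an internal-term-to-CT mapping, assuming Python
# with pandas. The internal names and the LBTESTCD/LBTEST codes shown here are
# illustrative only; verify target terms against the published SEND CT.
import pandas as pd

# Hypothetical internal lab test names mapped to (LBTESTCD, LBTEST) pairs.
CT_MAP = {
    "Alanine Transaminase": ("ALT", "Alanine Aminotransferase"),
    "Blood Sugar":          ("GLUC", "Glucose"),
}

def apply_ct_map(df: pd.DataFrame, internal_col: str) -> pd.DataFrame:
    """Add LBTESTCD/LBTEST columns derived from an internal test-name column.
    Unmapped terms are left blank so they can be reviewed and added to the map."""
    mapped = df[internal_col].map(CT_MAP)
    df["LBTESTCD"] = mapped.map(lambda t: t[0] if isinstance(t, tuple) else "")
    df["LBTEST"]   = mapped.map(lambda t: t[1] if isinstance(t, tuple) else "")
    return df

raw = pd.DataFrame({"internal_test": ["Alanine Transaminase", "Blood Sugar", "Unmapped Test"]})
print(apply_ct_map(raw, "internal_test"))
```

Leaving unmapped terms blank, rather than guessing, makes it easy to report which internal terms still need a decision.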
Maintenance and Support Planning
Once implemented, maintaining SEND readiness requires a certain amount of upkeep. Consider the following common post-implementation support needs:
Validators
Validators perform a functional check of the datasets, looking for structural integrity issues such as missing values in Required fields, missing columns, incorrect CT, and so on. Different tools check for different issues, and no single tool should be considered authoritative. That said, they are an extremely useful step in the process of creating datasets, checking the integrity of a SEND package before it is sent out.

Validators are a separate concept from GLP system validation. The use of a validator does not preclude the need for system validation or QC on the systems or processes that produce the datasets. The validator's purpose is more of a nuts-and-bolts check, intended to catch structural deviations from the specifications (such as missing values) rather than logical issues (such as incorrect calculations), which a system validation would cover.

There is no required or preferred validator. Early on, some validation rules were developed jointly by the FDA and industry. A variant of these rules is currently used by the free, open-source OpenCDISC Validator tool and can be viewed through its configuration. Organizations are also free to build their own validator tools on top of the rules, for example to validate organization-specific data cases or to provide additional checks on incoming data to be consumed. Validation rules aside, the SENDIG provides the official rules for what comprises a SEND-compliant package, so the implementation guide takes precedence over any discrepancies between it and the validation rules.

In September 2013, the FDA released a validation rule set that is used when they receive SEND datasets in a submission. This rule set is available from http://www.fda.gov/ForIndustry/DataStandards/StudyDataStandards/default.htm in the section on Study Data Validation Rules.
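To make the nuts-and-bolts nature of these checks concrete, here is a minimal sketch of one such structural check, assuming Python with pandas. The Required-variable list is an illustrative subset only (consult the SENDIG for the authoritative list), and a script like this does not replace an actual validator tool or system validation.

```python
# A minimal, illustrative sketch of the kind of structural check a validator
# performs, written in Python with pandas. The Required-variable list below is
# an illustrative subset only; consult the SENDIG for the authoritative list.
import pandas as pd

REQUIRED_BW_VARS = ["STUDYID", "DOMAIN", "USUBJID", "BWSEQ", "BWTESTCD", "BWTEST"]

def check_required(df: pd.DataFrame, required: list) -> list:
    """Return findings for Required variables that are missing or contain nulls."""
    findings = []
    for var in required:
        if var not in df.columns:
            findings.append(f"Required variable {var} not found")      # cf. SE0056
        elif df[var].isna().any() or (df[var].astype(str).str.strip() == "").any():
            findings.append(f"Required variable {var} contains null values")
    return findings

bw = pd.read_sas("bw.xpt", format="xport", encoding="utf-8")   # hypothetical file
for finding in check_required(bw, REQUIRED_BW_VARS):
    print(finding)
```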
Dealing With Validator Errors and Warnings
Errors are generally problems with your datasets that need to be fixed, although you should check the implementation guide to confirm that the reported error is a genuine issue. For example, if you get an error such as 'SE0056, SEND Required variable not found, Variables described in SEND as Required must be included', check the dataset against the implementation guide and make sure the Required variable is present in the dataset and is not null.
Warnings, on the other hand, may flag conditions that are acceptable for your data. For example, you may get a warning in the LB domain: 'Rule ID SD0029 - Standard Units (--STRESU) should not be NULL when Character Result/Finding in Std Units (--STRESC) is provided'. Since some clinical pathology parameters have no units (specific gravity, A/G ratio, pH, etc.), this warning is innocuous and can be ignored.
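A quick script can help triage this kind of warning by separating genuinely unitless parameters from records that need a fix. This is a sketch assuming Python with pandas; the unitless test codes listed are examples only and should be confirmed against your own data and CT before any warning is dismissed.

```python
# A quick, illustrative triage of SD0029-style warnings in LB, assuming Python
# with pandas. The unitless test codes are examples only; confirm the codes in
# your own data and CT before treating any warning as ignorable.
import pandas as pd

UNITLESS_TESTS = {"PH", "SPGRAV", "ALBGLOB"}   # e.g. pH, specific gravity, A/G ratio

lb = pd.read_sas("lb.xpt", format="xport", encoding="utf-8")   # hypothetical file

# Rows where a standardized character result exists but the standard unit is blank.
has_result = lb["LBSTRESC"].notna() & (lb["LBSTRESC"].astype(str).str.strip() != "")
no_unit    = lb["LBSTRESU"].isna() | (lb["LBSTRESU"].astype(str).str.strip() == "")
flagged    = lb[has_result & no_unit]

# Separate genuinely unitless parameters (expected) from records that need a fix.
expected     = flagged[flagged["LBTESTCD"].isin(UNITLESS_TESTS)]
needs_review = flagged[~flagged["LBTESTCD"].isin(UNITLESS_TESTS)]

print(f"{len(expected)} unitless-by-design records, {len(needs_review)} records to review")
```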
Validation
From a regulatory systems validation standpoint, there isn't anything necessarily special about SEND datasets. It can help team members understand the validation needs to think of the datasets as you would individual tabulations: yet another output of the system you are validating. That said, it will probably be useful to get your systems validation group on board so they understand what SEND means and how it plays into the system changes in which it is implemented. The following page offers more discussion on how to treat SEND with regard to QA/QC, where the thinking and considerations are similar: Handling of SEND in Study Documentation.