Friday, November 30, 2012
DICOM and the challenges it faces in the ever-changing healthcare IT environment
In this writing I will explain the Digital Imaging and Communications in Medicine (DICOM) standard. I will consider the current and proposed changes in the health informatics infrastructure, the challenges they may pose to the DICOM standard, and why.
DICOM is a standard made up of a long list of parts. According to Merge Healthcare (n.d.), these parts are:
Conformance – These are statements written to allow a network administrator to plan or coordinate a network of DICOM applications.
Information Object Definitions and Service Class Specifications – This standard defines the types of services and information exchanged using DICOM.
Data Structures and Encoding & Data Dictionary – This is how commands and data should be encoded for interpretation (see the sketch after this list).
Message Exchange – This is the standard by which two DICOM applications mutually agree upon the services they will perform over the network.
Network Communication Support for Message Exchange – How messages will be exchanged using TCP/IP and OSI.
Common Media Storage Functions for Data Interchange – This is the DICOM model for storage of images on removable media.
Media Storage Application Profiles & Media Formats and Physical Media for Data Interchange – Specifications on physical storage media and details of the characteristics of various physical media and media formats.
Grayscale Standard Display Function – A standardized display function for displaying grayscale images.
Security Profiles – Secure network transfers and secure media.
DICOM Content Mapping Resource – Defines templates and context groups used elsewhere in the standard.
Explanatory information – Consolidates informative information previously contained in other parts of the standard.
Web Access to DICOM Persistent Objects (WADO) – Specifies a web-based service for accessing and presenting DICOM persistent objects.
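To make the encoding and data dictionary ideas above concrete, here is a minimal sketch using the open-source pydicom library (a third-party tool, not part of the standard itself; the file name is an assumption):

```python
# A minimal sketch of reading DICOM data elements with pydicom.
# "image.dcm" is an assumed example file name.
import pydicom

ds = pydicom.dcmread("image.dcm")

# Each data element carries a tag (group, element), a Value Representation
# (VR) that tells the receiver how the value is encoded, and the value
# itself, as defined by the Data Structures and Encoding & Data Dictionary
# parts of the standard.
for elem in ds:
    print(f"{elem.tag} {elem.VR:4} {elem.name}: {elem.value}")

# Well-known attributes are exposed by keyword via the data dictionary.
print(ds.PatientName, ds.Modality)
```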
Proposed changes in the US health informatics infrastructure include the adoption of health information exchanges, and this can present a problem for DICOM in multi-site use. According to Langer & Bartholmai (2011), one challenge of multi-site interoperability is “the magnitude of data produced by imaging systems and unstructured text reports.” They go on to say that this enormous amount of information “has made it very difficult to share results among sites until the wide availability of both broadband networks and universal protocols.” This is definitely a challenge for image transmission.
Another transmission challenge is the use of XML, which is utilized by HL7 V3.0; one example is the caBIG architecture (Langer & Bartholmai, 2011). Langer & Bartholmai mention that a “vast majority of PACS and imaging modalities will require intervening computers to broker the DICOM to caBIG translation.”
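As a rough illustration of what such a broker might do, the sketch below reads a binary DICOM object and re-expresses a few attributes as XML. The element names are illustrative assumptions only and do not follow the actual caBIG or HL7 V3 schemas:

```python
# A hypothetical sketch of a DICOM-to-XML "broker": read binary DICOM,
# emit an XML re-expression of selected attributes.
import xml.etree.ElementTree as ET
import pydicom

ds = pydicom.dcmread("study.dcm")  # assumed example file

root = ET.Element("ImagingStudy")  # illustrative element name
for keyword in ("PatientID", "StudyDate", "Modality", "StudyDescription"):
    child = ET.SubElement(root, keyword)
    child.text = str(ds.get(keyword, ""))

print(ET.tostring(root, encoding="unicode"))
```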
Another challenge is storage. Steve Langer (2011) writes, “images from different vintage equipment will encompass different levels of the standard.” What this means is that there can be ambiguity in what data elements mean, depending on the period when the data was created, and in where data elements are located if they are not in a standard location (Langer, 2011).
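Here is a hypothetical sketch of the kind of defensive lookup this implies: an attribute from older equipment may be absent from its standard location and stored in a private element instead (the private tag below is invented purely for illustration):

```python
# Defensive attribute lookup across equipment vintages (sketch only).
from pydicom.dataset import Dataset

def body_part(ds: Dataset) -> str:
    # Standard location first: Body Part Examined (0018,0015).
    elem = ds.get((0x0018, 0x0015))
    if elem is not None and elem.value:
        return str(elem.value)
    # Fall back to a vendor-specific private element (hypothetical tag).
    elem = ds.get((0x0009, 0x1010))
    if elem is not None:
        return str(elem.value)
    return "UNKNOWN"
```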
In summary, DICOM faces several challenges. They concern image transmission, because of large data sets and the translation required between standards, and data storage, because of legacy definitions.
References:
Langer, S. (2011, November 12). A Flexible Database Architecture for Mining DICOM Objects: the DICOM Data Warehouse. Journal of Digital Imaging, 25, pp. 206-212. DOI: 10.1007/s10278-011-9434-6
Langer, S. & Bartholmai, B. (2011, February). Imaging Informatics: Challenges in Multi-site Imaging Trials. Journal of Digital Imaging, 24(1), pp. 151-159. DOI: 10.1007/s10278-010-9282-9
Merge Healthcare. (n.d.). The DICOM Standard. Retrieved from http://estore.merge.com/mergecom3/resources/dicom/java_doc/javadoc/doc-files/dicom_intro.html
Sunday, November 18, 2012
Data Modeling: A Crash Course
The healthcare industry is making great strides toward using IT to provide better healthcare outcomes, increase patient safety and reduce costs. Behind every one of these initiatives should be a well-thought-out IT system. The backbones of these systems are databases. But how does a healthcare organization (HCO) or software company start creating one? It all begins with systems analysis and design, and that cannot be done without data modeling. Here is your crash course in data modeling and why it is important.
Data Modeling
In the data modeling process there are three items that must be completed for success: the Conceptual Data Model, the Logical Data Model and the Physical Data Model. In this writing I will briefly explain what data modeling is, what each of these models is and why they are important to the development phase of a software system. I will also be using an example of a new patient check-in system for Dr. Model’s office that allows patients to enter demographic data upon arrival.
First, what is data modeling and why is it important? Data modeling is, according to Margaret Rouse (2010), “…the formalization and documentation of existing processes and events that occur during application software design and development.” When data modeling, analysts use tools and techniques to translate the complexities of a system into easily understandable data flows and processes, which are used as the basis for constructing a database system or re-engineering one (Rouse, 2010). The items that come out of this exercise show the data at various levels of granularity. Also, by having well-documented models, stakeholders can eliminate errors before coding a new system (Rouse, 2010).
Conceptual Data Model (CDM)
This is the highest level of data modeling, and the CDM is a vital first step in systems analysis and design. Pete Stiglich (2008) explains that by creating a CDM, “key business entities or objects and their relationships are identified.” For example, at Dr. Model’s office the current system of collecting information is to let the patient fill out a form. The designers now create a CDM that shows the various entities the new system will need, such as a patient entity, an insurance company entity, an appointment date entity, and possibly others. The model then shows which entities are related to one another; the patient entity, for example, will have relationships to all the other entities. It is worth noting that at this point the model stays at a very high conceptual level, showing only what is related to what. When creating the CDM, stakeholders will inevitably find many-to-many relationships, but these will be addressed in the logical data model (Stiglich, 2008). A rough sketch of such a model appears below.
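This is a minimal sketch of what the conceptual level captures for Dr. Model’s office; the entity and relationship names are illustrative assumptions, with no attributes or data types yet:

```python
# A conceptual data model reduced to its essence: entities and the
# relationships between them (names are illustrative only).
conceptual_model = {
    "entities": ["Patient", "InsuranceCompany", "Appointment"],
    "relationships": [
        ("Patient", "carries", "InsuranceCompany"),  # many-to-many, resolved in the LDM
        ("Patient", "arrives for", "Appointment"),   # one-to-many
    ],
}
```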
Logical Data Model (LDM)
The LDM fills in the details of the CDM. According to 1keydata (n.d.), a logical data model describes the entities, the attributes of each entity, the primary key that uniquely identifies each record and the foreign keys that link entities together, all without reference to any particular database product. This is also where the many-to-many relationships found in the CDM are resolved, typically by introducing an associative entity, and where the design is normalized to remove redundant data (Chapple, n.d.). In Dr. Model’s case, the patient entity now gains attributes such as first name, last name and age, a patient ID becomes the primary key, and an associative entity links patients to their insurance companies. A sketch follows.
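Here is a small illustrative sketch of that logical level; the attribute names are assumptions:

```python
# Logical level: entities gain attributes and keys, and the many-to-many
# between patients and insurance companies is resolved with an
# associative entity (all names are illustrative).
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: int          # primary key
    first_name: str
    last_name: str
    age: int

@dataclass
class InsuranceCompany:
    insurer_id: int          # primary key
    name: str

@dataclass
class PatientInsurance:      # associative entity resolving the many-to-many
    patient_id: int          # foreign key -> Patient
    insurer_id: int          # foreign key -> InsuranceCompany
    member_number: str
```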
Physical Data Model (PDM)
Exforsys Inc. (2007) defines this data model as “the design of data while also taking into account both the constraints and facilities of a particular database management system.” This is where analysts take the LDM and make sure that all the pieces of data are configured for the target environment. To further explain, in our Dr. Model case the patient first name is now defined as a variable character field (varchar) with a stated maximum length, and the age field is defined as an integer data type. Knowing this information from the PDM can be useful in estimating storage needs (Exforsys Inc., 2007). Analysts must also note that this model may differ based on the database management system that will be used (Oracle, MS SQL Server, MySQL, etc.) (1keydata, n.d.).
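As one possible sketch of the physical level, here is the patient table defined for SQLite; the column names and lengths are illustrative assumptions, and another DBMS would use its own dialect:

```python
# A minimal physical-model sketch using SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id  INTEGER PRIMARY KEY,
    first_name  VARCHAR(35),  -- variable character field with a length limit
    last_name   VARCHAR(35),
    age         INTEGER       -- integer data type, as in the example above
);
""")
```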
In conclusion, the end result of an application relies on a strong data modeling process. This process builds the foundation for the database, and without it many issues can occur. It starts with a very high-level CDM, fills in the blanks with the LDM and then makes sure it will all fit in a neat package with the PDM. Any HCO needs to rely on this modeling process so that it can maintain quality data for the foreseeable future.
References:
1keydata. (n.d.). Logical Data Model. Retrieved from http://www.1keydata.com/datawarehousing/logical-data-model.html
1keydata. (n.d.). Physical Data Model. Retrieved from http://www.1keydata.com/datawarehousing/physical-data-model.html
Chapple, M. (n.d.). Database Normalization Basics. About.com. Retrieved from http://databases.about.com/od/specificproducts/a/normalization.htm
Exforsys Inc. (2007, April 23). Physical Data Models. Retrieved from http://www.exforsys.com/tutorials/data-modeling/physical-data-models.html
Rouse, M. (2010, August). Data modeling. SearchDataManagement. Retrieved from http://searchdatamanagement.techtarget.com/definition/data-modeling
Stiglich, P. (2008, November). So You Think You Don’t Need A Conceptual Data Model. EIMInstitute.org, 2(7). Retrieved from http://www.eiminstitute.org/library/eimi-archives/volume-2-issue-7-november-2008-edition/so-you-think-you-don2019t-need-a-conceptual-data-model
Wednesday, November 14, 2012
How important are metadata and data dictionaries?
In data warehousing and data storage, metadata and the use of a data dictionary are extremely important. Metadata, in layman’s terms, is basically data about data. It explains how the data was created, when it was created and the type of data it is. Staudt, Vaduva & Vetterli (n.d.) state that when working with the complexity of building, using and maintaining a data warehouse, metadata is indispensable, because it is used by other components or even directly by humans to achieve particular tasks.
Metadata can be used in three different ways:
- Passively – Documents the structure, development process and use of the data warehouse system (Staudt, Vaduva & Vetterli, n.d.)
- Actively – Used in data warehouse processes that are “metadata driven” (Staudt, Vaduva & Vetterli, n.d.)
- Semi-actively – Stored as static information to be read by other software components (Staudt, Vaduva & Vetterli, n.d.)
So as you can see, metadata is important not only for storing information about the data; it is also used in processes by the data warehouse and by other applications. Metadata also improves data quality by providing consistency, completeness, accuracy, timeliness and precision, because it records the creation time, the author of the data, its source and its meaning when it was created (Staudt, Vaduva & Vetterli, n.d.).
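As a small illustration, the descriptive metadata attached to one warehouse table might look like the following; the field names and values are assumptions:

```python
# A sketch of descriptive metadata for a warehouse table
# (field names and values are illustrative only).
from datetime import datetime

lab_results_metadata = {
    "created": datetime(2012, 11, 1, 9, 30),  # when the data was created
    "author": "HL7 interface engine",         # who or what created it
    "source": "Hospital LIS feed",            # where it came from
    "meaning": "Final lab results, one row per resulted test",
    "type": "relational table",
}
```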
Regarding the data dictionary, this reminds me of when I would query the database at a past position of mine. Because there was no data dictionary, it was hard to manually decipher the relevance of the data I was searching for and where it was stored. Because of this, I did not always bring back the correct fields necessary to complete my work, which wasted time. On AHIMA’s website, Clark, Demster & Solberg (2012) prepared an article about data dictionaries and how they can be used to improve data quality by helping organizations:
- Avoid inconsistent naming conventions
- Avoid inconsistent definitions
- Avoid varying lengths of fields
- Avoid varied element values (Clark, Demster & Solberg, 2012)
By using a data dictionary, consistency is created in the data, which in turn improves data quality. The sketch below shows one way a data dictionary entry can enforce that consistency.
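Here is a minimal sketch of that idea; the field definitions are illustrative assumptions:

```python
# A tiny data dictionary: each field gets one name, one definition,
# one type/length and one set of allowed values (all illustrative).
DATA_DICTIONARY = {
    "patient_last_name": {
        "definition": "Patient's legal last name",
        "type": str,
        "max_length": 35,
    },
    "discharge_disposition": {
        "definition": "Patient's status at discharge",
        "type": str,
        "allowed_values": {"home", "transferred", "expired"},
    },
}

def validate(field: str, value) -> bool:
    """Check a value against its data dictionary entry."""
    entry = DATA_DICTIONARY[field]
    if not isinstance(value, entry["type"]):
        return False
    if "max_length" in entry and len(value) > entry["max_length"]:
        return False
    if "allowed_values" in entry and value not in entry["allowed_values"]:
        return False
    return True

print(validate("discharge_disposition", "home"))       # True
print(validate("discharge_disposition", "went home"))  # False: not an allowed value
```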
In conclusion, both metadata and data dictionaries are vital to creating consistent data. Such data can be tracked and can be used to create interoperable processes between the data warehouse and other applications. Without them, architects are taking a chance and increasing the risk of working with lower-quality data.
References:
AHIMA. (2012, January). Managing a Data Dictionary. Journal of AHIMA, 83(1), pp. 48-52. Retrieved from http://library.ahima.org/xpedio/groups/public/documents/ahima/bok1_049331.hcsp?dDocName=bok1_049331
Staudt, M., Vaduva, A. & Vetterli, T. (n.d.). The Role of Metadata for Data Warehousing. Retrieved from http://www.informatik.uni-jena.de/dbis/lehre/ss2005/sem_dwh/lit/SVV99.pdf
Monday, November 12, 2012
eHealth, Patient-Interaction and Data Quality
Facebook, Instagram, Twitter, Pinterest. Have you ever heard of these? Of course you have. Over the last several years, consumer interaction with computers has soared. People are now shopping, video chatting, expressing themselves, posting pictures and conducting business using computers, and this is just the tip of the iceberg in terms of how the computer is being used to interact with the world. And the computer is not the only means. Using smartphones, the masses can now do all of this on the go, creating a never-ending flow of information. What is this information made of? Data!
It is only natural that the healthcare industry, now venturing into the IT world, would want to jump on the train to ride the information highway. With all the human-computer interaction going on in the lay community, it only makes sense to find a way to have patients interact with their own health data.
Hesse & Shneiderman (2007) wrote the following in the abstract of their article: “New advances in eHealth are prompting developers to ask ‘what can people do?’ How can eHealth take part in national goals for healthcare reform to empower relationships between healthcare professionals and patients, healthcare teams and families, and hospitals and communities to improve health equitably throughout the population?” (Hesse & Shneiderman, 2007)
The data that is used by patients and providers goes beyond the one-to-one relationship between them. This is the value that the emerging area of eHealth brings. The physician is a “microunit” of a larger system, which includes the care delivery team (nurses, office staff, etc.), and the patient is also a “microunit,” surrounded by family, friends and the community (Hesse & Shneiderman, 2007).
One way that patients can help improve the quality of data is by looking up information on their own through a provider-sponsored portal (Geissbühler, 2012). This will help them “assist” the provider in pinpointing their issues, while having a reliable source of information.
Another emerging trend is “patient-controlled health information exchanges” (Geissbühler, 2012). These are ways that a patient can control access to their health information, federating documents from several providers or organizations and even providing their own contributions (Geissbühler, 2012).
A third way patients can interface with their health information is by using their “digital proxy,” a “mobile, always-on, permanently connected, and context-aware device such as a smartphone” (Geissbühler, 2012). Homes can also be made intelligent and aware of the needs of their inhabitants (Geissbühler, 2012).
Some of this may seem like science fiction, but this is the way the health industry is moving. People are getting more comfortable with sharing information online in the social networking world. The use of smartphones is proliferating and can be a valuable asset. All of this information sharing, along with allowing patients to control their own data, will help to increase data quality as a whole. This is good not only for the patient, but also for the providers and the communities that the patients live in.
References:
Geissbühler, A. (2012, June). eHealth: easing the transition in healthcare. Swiss Medical Weekly, 142. DOI: 10.4414/smw.2012.13599
Hesse, B. W. & Shneiderman, B. (2007, May). eHealth research from the user’s perspective. American Journal of Preventive Medicine, 32(5), pp. S97-S103. DOI: 10.1016/j.amepre.2007.01.019
Monday, November 5, 2012
Keeping Data Safe
In order to protect information systems and data, a best practice is for organizations to develop and maintain a data security program. In this writing I will examine the essential elements of a health care information security program and why each element is essential, looking at both technology and human factors.
Columbia, SC, October 30, 2012: “As many as 657,000 S.C. businesses had their tax information stolen in the massive security breach at the state Department of Revenue…” (Shain, 2012). October 2012: “Hackers were able to breach more than 60 Barnes & Noble (BKS) stores, including locations in New York City, Miami, San Diego and Chicago, and obtain credit card information…” (Graziano, 2012). Again in October 2012, a Vermont credit union accidentally threw away two backup tapes, which could affect up to 85,000 individuals (Walker, 2012). October 2012 was a busy month for data breaches. The breaches highlighted above have nothing to do with the health care industry, but as the proliferation of EMRs, EHRs, PHRs, mHealth, wireless technology, electronic claims processing and HIEs continues, HCOs will need to remain vigilant in protecting PHI.
In 2009, Clifton Phua wrote an article about computer fraud and security. In it he noted that 81% of security breaches were from malicious outsiders, 17% from malicious insiders and 2% from unintentional insiders (Phua, 2009). This means that anyone housing sensitive data must take measures to lock out those who want to get into the systems, and to make sure those who have appropriate access do not intentionally or unintentionally disseminate protected information.
There are several ways to help ensure that data will not be breached. The first is on the technology side. These measures include firewalls, intrusion detectors and robust anti-virus protection (Phua, 2009). Firewalls stop intruders from getting into your private network through security rules and other measures; there are two types, software firewalls and hardware firewalls. If by chance someone does get in, an intrusion detector will send alerts so appropriate measures can be taken. Lastly, anti-virus software will stop malicious software from getting onto the network and creating back doors for intruders.
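As a toy illustration of the “security rules” idea (a real firewall is far more sophisticated, and the rules here are invented):

```python
# A toy rule evaluator, not a real firewall: each rule allows or
# denies traffic matching a port, with a default-deny posture.
RULES = [
    {"port": 443, "action": "allow"},  # inbound HTTPS
    {"port": 22,  "action": "deny"},   # block outside SSH
]

def check(port: int) -> str:
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "deny"  # anything unmatched is denied by default

print(check(443), check(22), check(23))  # allow deny deny
```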
On the human side, HCOs should implement data handling policies (Phua, 2009). Some items in a policy like this could be:
· Shredding paper and physically destroying hard drives
· Reviewing the what, where and how of data
· Background checks on employees
· Auditing employees’ access to data
· Data encryption on laptops and other portable devices, to protect information in the case of theft (Phua, 2009) (see the sketch after this list)
· Using strong passwords
And the list goes on.
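As a small sketch of the encryption bullet above, here is one way to encrypt a record at rest using the third-party Python cryptography package; real deployments more often use full-disk encryption products, and the sample record is invented:

```python
# Encrypting data at rest (sketch only) with the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # the key must itself be stored securely
fernet = Fernet(key)

record = b"Patient: Jane Doe, MRN 000000"  # illustrative PHI
token = fernet.encrypt(record)  # ciphertext safe to store on the device

assert fernet.decrypt(token) == record
```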
Another check that can be put in place is the use of thin clients. These are machines that access data in a client-server fashion. The machines can also have copy and paste, USB drives and other ways to export data disabled (Phua, 2009).
In conclusion, when measures are taken to protect PHI, both the use of technology and ways to recover from human error must be taken into account. Attacks can come from the inside as well as the outside, and breaches can be intentional or unintentional. A wise plan takes all of this into account and sets up a roadmap toward a more secure infrastructure.
References:
Graziano, D. (2012, October 24). Hackers steal credit card information from 63 Barnes & Noble stores. Retrieved from http://bgr.com/2012/10/24/barnes-noble-security-breach-credit-card-information/
Phua, C. (2009, January). Protecting organisations from personal data breaches. Computer Fraud & Security, 2009(1), pp. 13-18. DOI: 10.1016/S1361-3723(09)70011-9
Shain, A. (2012, November 1). Data security breach expands to 657,000 S.C. businesses. Retrieved from http://www.mcclatchydc.com/2012/11/01/173313/data-security-breach-expands-to.html
Walker, D. (2012, October 26). Vermont credit union discards unencrypted data of 85,000. Retrieved from http://www.scmagazine.com/vermont-credit-union-discards-unencrypted-data-of-85000/article/265522/