Reader Question: Robert asks about study deletion

Hi, I am a PACS admin using McKesson PACS; however, due to work demands I never got formal training, so I am relying on support and any manuals I can find. One of my biggest gripes is my inability to delete studies.

Deleting studies is pretty straightforward. On the server, open a cmd window and type pacsutil to start the PACS Administrator Utility. In the current version, type 3 to enter the Study Folder Utilities menu, then 8 for the Deleting patient or study records submenu, and from there you have option 1, Delete a study from the database and its storage locations, or option 2, Delete a patient record and all of its study records. The names are pretty self-explanatory as to what you are getting yourself into: either getting rid of one study, or getting rid of a patient and all of their studies.

Anyway, just put in the accession number of the study (or the patient ID), enter your context ID (if you are a single site, probably 1), and voila, the studies are gone. This is an extremely useful tool, but also one where you want to pay attention, double-check your input, and make sure you are doing the work for the right reasons. Unlike deleting an image in the GUI, where only the reference pointer is deleted and you can restore it, this actually gets rid of the study, so if someone is trying to cover up their mistake and you delete the evidence for them, expect negative repercussions.

FYI I probably wouldn’t script this, as you may find things you didn’t expect, like duplicate patient IDs or accession numbers, and you would obviously want to catch those.

Reader Question: Landy asks about reports

Our administration has asked what reports we can run on our McKesson PACS and support doesn’t provide much info. Do you know of any sites that may be able to give us a little more information?

If you go to your web server address and, instead of /hrs or /mckessonradiology, try /administrativereports, you should be able to get a few canned reports of questionable value. I am also posting SQL queries on this page that generate reports of use to me and my employer, and they may be of interest to you and yours as well. Beyond that, I strongly suggest you feed your log files into Splunk. Most of the McKesson logs are not well formatted for it, but with a bit of massaging it can provide some useful statistics; we use it to log all of our archive retrieval times so we can trend them and look for any issues. Note that support will help you with the /administrativereports site, but for any of the queries on this page, or for help with Splunk, they are not going to be a resource.

Counting study volumes

We typically try to forecast how many licenses we will need to purchase annually so that we can budget for the expense. We have used the storage_report and VBL scripts in the past, but neither of them gives what I consider to be reliable numbers in an easy-to-use format. Here is a quick query that counts the studies added in the same year they were performed (one of the accuracy issues we had before was that, as comparison studies were added, the numbers for prior years would go up, constantly throwing off our year-over-year and trend data). The results are counted and grouped by month. We just drop the numbers into Excel and can do some easy stats.

set linesize 300
set pagesize 500
select TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM'), count(MY_STUDY_ID)
from STUDY
where TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY') = TO_CHAR(MY_EXAM_DATE_TIME, 'YYYY')
group by TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM')
order by TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM')
;

Returns:

TO_CHAR(MY_CREATION_DATE_TIME,'YYYY-MM')    COUNT(MY_STUDY_ID)
------------------------------------------- ------------------
2006-01                                                   1587
2006-02                                                   4154
2006-03                                                   4617
2006-04                                                   3857
2006-05                                                   4750
2006-06                                                   4971
2006-07                                                   4573
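
If you only need a single year, the same query can be narrowed with one extra condition in the WHERE clause. This is just a sketch, restricting the count above to studies created (and performed) in 2014; swap in whatever year you are budgeting for:

select TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM'), count(MY_STUDY_ID)
from STUDY
where TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY') = TO_CHAR(MY_EXAM_DATE_TIME, 'YYYY')
and TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY') = '2014'
group by TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM')
order by TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM')
;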

Create body-part-specific prefetch rules

We created prefetch rules based on body part because our display protocols were based on body part, while our prefetch rules were still generic and pulled based on modality and performed date. Not only did this mean that comparison studies were not always ready for our rads, but relevant studies that were pulled ad hoc did not drop off the way prefetched studies do, wasting valuable cache2 space.

Creating prefetch rules has the potential to be a huge job, as some sites get really specific with body parts, so there can be a lot of changes to make. You can also go a few steps further and pull by multiple criteria if your rads are really specific about their needs. However, the steps and scripts below make the basics of this job pretty damn easy, and you could always start here to get 90% relevant pulls and then pick away at the harder stuff.

First, we needed a list of all the body parts we were using, which is easy to pull from the database through SQL*Plus:

select * from body_part;

Returns:

MY_BODY_PART_ID MY_CODE                      MY_DESCRIPTION
--------------- ---------------------------- --------------------------
1               ABDOMEN                      Abdomen
2               AORTA                        Aorta
218             BONE-SURVEY                  Bone Survey
4               ANKLE-L                      Ankle, Left
5               ANKLE-R                      Ankle, Right

This gives us our headers, and I wanted to make sure that there wasn’t too much extra crap in the results. It looks fairly good, so I just want it exported and cleaned up a little:

SQL> spool body.txt;

SQL> set linesize 1000
SQL> set wrap off
SQL> set trimspool on
SQL> set trimout on
SQL> set pagesize 0
SQL> select * from body_part;

Returns:

1 ABDOMEN                               Abdomen
2 AORTA                                 Aorta
218 BONE-SURVEY                         Bone Survey
4 ANKLE-L                               Ankle, Left
5 ANKLE-R                               Ankle, Right
148 ANKLE-B                             Ankle, Bilat
227 EXTREMITY-U                         Extremity, Unknown
237 ABDOMEN-PELVIS                      Abdomen/Pelvis
9 BLADDER                               Bladder
10 BRAIN                                Brain
149 CLAVICLE-B                          Clavicle, Bilat
12 BREAST-L                             Breast, Left
13 BREAST-R                             Breast, Right
14 CALCANEUS-B                          Calcaneus, Bilat
15 CALCANEUS-L                          Calcaneus, Left
16 CALCANEUS-R                          Calcaneus, Right
17 CAROTID                              Carotid

There was a bunch more of this and then:

149 rows selected.

SQL> spool off;

According to the route.base file, the bodypartcode prefetch rules work off of the CODE field, so I could probably have just exported that one column and been done, but I usually want to take a look at the data in Excel to be confident we are not introducing problems based on the formatting, duplicates that have been marked inactive, etc. This export is pretty basic, but it still never hurts to double-check. I'll get rid of the description and ID once I think that everything is in order.

Once I had this in Excel, the most difficult part of this adventure began: actually figuring out which studies we wanted to prefetch for a study type. After much deliberation about Left/Right/Bilateral, and how to pull just what was appropriate, we decided to err on the side of pulling too many studies. They only last for 21 days in the cache (adjustable via flush.site), so it isn't a huge deal to have extra ones. So I went into Excel and reduced my list to just the main body part designator (where we use HAND-L, HAND-R, and HAND-B, I reduced it to just HAND). Since "HAND" is the beginning of all of these side-designated parts, and that model is replicated throughout, we can use the HasPrefix operator for grouping parts. This brought us from 149 body parts down to 77, which is pretty manageable. For the combined exams (Chest/Abd/Pelvis) I made a new designator called TRUNK that will include Chest and Abdomen. Pelvis is a bit weirder because it would pull in a bunch of ultrasound studies if it were included, so I did not add it to the grouping; however, because we are grouping on the starting words, the relevant MR and CT studies should still get pulled, since none of the multi-part names start with the word Pelvis.
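
If you would rather not do the trimming in Excel, here is a rough PowerShell sketch of the same reduction. It assumes you have already cut the spool output down to just the CODE column in a file called C:\Scripts\allcodes.txt (that file name and the suffix list are just examples from our naming scheme); it strips the -L/-R/-B laterality suffixes, dedupes what is left, and writes the result out. It will not build combined groupings like TRUNK for you, and it leaves oddballs like ABDOMEN-PELVIS alone, so you still want to eyeball the output.

# Reduce side-designated body part codes (HAND-L, HAND-R, HAND-B) to their base name (HAND),
# dedupe the results, and write them to the list the rule-generating script reads.
Get-Content C:\Scripts\allcodes.txt |
    ForEach-Object { $_.Trim() -replace '-(L|R|B)$', '' } |
    Where-Object { $_ -ne '' } |
    Sort-Object -Unique |
    Set-Content C:\Scripts\bodyparts.txt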

I ended up with a list that looked something like this, which I saved as C:\Scripts\bodyparts.txt:

AC-JOINTS
ANKLE
AORTA
AXILLA
BLADDER
BONE-LENGTH
BONE-SURVEY
BRAIN
BREAST
CALCANEUS
CAROTID
CLAVICLE
COCCYX
COLON
ELBOW

To do something really basic, like prefetching the last three studies for a particular body part into a cache2 location, each body part would need an entry in route.site that looks like this (where %X designates the body part code):

%X.LABEL    install "Bodypart %X"
%X.MEMBERSHIP_RULE  install bodypartcode EQ %X
%X.prefetch.LOCATIONS   install cache2
%X.prefetch.cache2.STUDIES  install %X 1 3

Since we are matching on the starting characters of the body part, I am using HasPrefix instead of the EQ operator, and I am pulling five relevant priors instead of three because we are being less specific on laterality:

%X.LABEL    install "Bodypart %X"
%X.MEMBERSHIP_RULE  install bodypartcode HasPrefix %X
%X.prefetch.LOCATIONS   install cache2
%X.prefetch.cache2.STUDIES  install %X 1 5

The trunk one will be a bit different, since it has two body parts with different names to group together:

TRUNK.LABEL install "Bodypart TRUNK"
TRUNK.MEMBERSHIP_RULE   install bodypartcode HasPrefix CHEST or bodypartcode HasPrefix ABDOMEN
TRUNK.prefetch.LOCATIONS    install cache2
TRUNK.prefetch.cache2.STUDIES   install TRUNK 1 5

The TRUNK setup is a one-off, so I did it by hand. Here is a PowerShell script to create the rest:

# Read the reduced body part list and emit a route.site prefetch block for each entry
$bp = Get-Content C:\Scripts\bodyparts.txt

foreach ($a in $bp)
{
    # Plain strings go to the output stream, so they can be copied or redirected
    "$a.LABEL    install `"Bodypart $a`""
    "$a.MEMBERSHIP_RULE  install bodypartcode HasPrefix $a"
    "$a.prefetch.LOCATIONS   install cache2"
    "$a.prefetch.cache2.STUDIES  install $a 1 5"
    ""
}
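
Since the loop just emits plain strings, you can also send the output straight to a file instead of copying it off the console. For example (the script name is made up; use whatever you saved it as):

C:\Scripts\Make-PrefetchRules.ps1 > C:\Scripts\route_additions.txt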

To put this into your config:

  1. Copy the results of the PowerShell script into route.site
  2. Put all of the names from bodyparts.txt into the prefetch.GROUPS variable in route.site (see the example line after this list)
    • Leave “default” at the end
    • Also, you cannot use slashes to break this variable list into multiple lines like you can with almost every other config file; it needs to be one long line
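
For reference, here is roughly what our GROUPS line ended up looking like, abbreviated to the body parts listed above plus the manual TRUNK group (the real one has every name from bodyparts.txt on the same line, and I am assuming the same "install" syntax as the other route.site settings; check how the variable already appears in your file):

prefetch.GROUPS    install AC-JOINTS ANKLE AORTA AXILLA BLADDER BONE-LENGTH BONE-SURVEY BRAIN BREAST CALCANEUS CAROTID CLAVICLE COCCYX COLON ELBOW TRUNK default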

You may be able to enable this by just restarting the route process, but I am not sure. I put it into place during a scheduled downtime, so all processes were being restarted.

Storage Planning – Device Add/Remove Dates

While the STORAGE_REPORT output is great for figuring out a point in time for how much space you are using, storage planning needs a bit more depth. To find out how much we are growing, we need to know which devices are still sending in data, and for how long they will continue. My initial thought was that I should just ask all of our modality leads when equipment was purchased/decommissioned, but after the first attempt, I realized that information was going to be really difficult to extract. Instead, I decided to query the database for the first and last study times for each device.

My first query was:

SELECT SOURCE_ID, MAX(MY_EXAM_DATE_TIME), MIN(MY_EXAM_DATE_TIME)
FROM study 
GROUP BY SOURCE_ID;

Which returned:

 SOURCE_ID MAX(MY_EXA MIN(MY_EXA
---------- ---------- ----------
        25 11/05/2014 01/16/2006
           04/24/2015 01/01/1981
        30 04/30/2015 10/24/2005
        34 05/04/2015 11/15/1992
        51 10/24/2013 04/26/2002
        22 07/02/2008 07/21/1992
        43 09/15/2008 03/05/2007
        54 09/22/2014 09/05/2007
        83 05/01/2015 05/30/2001
        57 05/04/2015 05/08/1998
        91 05/04/2015 11/03/1994

 SOURCE_ID MAX(MY_EXA MIN(MY_EXA
---------- ---------- ----------
       129 09/10/2013 04/04/2001
       153 06/28/2013 07/29/2004
         1 01/01/2012 01/01/2012
       138 09/13/2013 12/02/2002
       245 04/27/2015 02/10/1997
       244 12/27/2012 10/18/2012
       123 09/13/2013 09/13/2013
       321 10/03/2008 10/03/2008
       380 05/04/2015 02/17/2015
        11 04/23/2011 11/29/1999
        28 05/04/2015 07/07/2004

This had a few issues: there are study dates listed from before we had a PACS system in place, the SOURCE_ID column is not very useful, and the headers are a pain to remove later on.

So I tried again using the creation date, a join to the source table, and a bit of formatting:

spool maxmin.txt

set pagesize 0

SELECT source.MY_CODE, MAX(study.MY_CREATION_DATE_TIME), MIN(study.MY_CREATION_DATE_TIME)
FROM study
JOIN source
ON study.SOURCE_ID = source.MY_SOURCE_ID
GROUP BY source.MY_CODE;

spool off

Which returned:

us25                            05/04/2015 09/25/2013                           
cr06                            05/04/2015 09/04/2014                           
mr03                            05/04/2015 04/10/2015                           
nwks5                           08/08/2011 05/26/2006                           
us2                             12/21/2011 01/20/2006                           
mm3                             02/03/2012 09/29/2009                           
us8                             03/20/2015 12/09/2010                           
nwks34                          11/05/2013 05/31/2011                           
nwks60                          09/16/2013 06/08/2011                           
nwks58                          07/07/2011 07/07/2011                           
nwks53                          11/05/2013 06/09/2011                           
us20                            05/04/2015 10/10/2011                           
dr06                            05/04/2015 08/18/2014                           
us27                            05/04/2015 11/05/2014                           
nwks13                          12/05/2006 06/27/2006                           
nwks3                           06/01/2011 01/20/2006                           
nwks8                           06/06/2011 05/10/2006                           
fnode0                          04/20/2015 01/24/2006                           
mm2                             05/04/2015 01/20/2006                           
mr1                             02/16/2015 01/20/2006        

This is something I could easily put into Excel along with the numbers from STORAGE_REPORT to get a better handle on what was still using space.

I am still making tweaks here and there to the Excel workbook to figure out our 5-year plan; when that is looking good, I will post about it.

Counting imported studies

We pull a lot of studies into PACS from other facilities, either off CDs or through VPNs, and are trying to better account for those pulls in terms of license use. The difficult thing with counting imported prior studies is that there is nowhere in the GUI to see when a study was imported if the import date is different from the performed date. This matters because the licensing count applies only to studies performed in the current license cycle, so if you import a CD for Sally in 2014 that has studies from 2014, 2013, and 2012, the 2014 study will show up in your 2014 license count and the others are basically freebies (if the CD only has studies older than a year, there is no hit to your license count at all). This information is available in the database for you to query.

I am assuming you know how to log in to your DB using sqlplus.

First you will need to get the facility IDs of the facilities you want to track, which you can do using this command:

select * from facility;

That will show you the facility names and IDs, which you can combine with a date range (here I am looking for everything that was both performed and imported in 2013).

SELECT count(*)
FROM study
WHERE TO_CHAR(MY_EXAM_DATE_TIME, 'YYYY') = '2013'
AND TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY') = '2013'
AND FACILITY_ID IN (15, 16, 17, 18, 19)
;

If you need to adjust your dates, MY_EXAM_DATE_TIME is the study performed date and MY_CREATION_DATE_TIME is the date the study was imported into your system. I only needed an annual count, but more granularity could be easily achieved by altering the date filter.
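
For example, here is a rough monthly breakdown of the same count, again using TO_CHAR so it does not depend on your date display format (the facility IDs are still ours; substitute your own):

SELECT TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM'), count(*)
FROM study
WHERE TO_CHAR(MY_EXAM_DATE_TIME, 'YYYY') = '2013'
AND TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY') = '2013'
AND FACILITY_ID IN (15, 16, 17, 18, 19)
GROUP BY TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM')
ORDER BY TO_CHAR(MY_CREATION_DATE_TIME, 'YYYY-MM')
;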

Automatically mark studies as reported

We have VPNs established with several other hospitals and clinics in the area so that we can push/pull images instead of constantly burning CDs. There are also a few devices in our clinic that send us images to store, but that our radiologists don't read (mostly needle guidance and limited OB studies done by the OBGYNs). For a long time, our staff was using their downtime to mark these as reported, but there is a much easier and less error-prone way: just tell your worker process to mark them as reported as they are imported. Here is the line in worker.site that marks studies from device us13 as reported:

MARK_STUDY_REPORTED_IF_MATCHING_SOURCES      install    us13

If you have several devices you want marked as reported, just list them all in the one line with a space between each name.
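
For example, ours might look something like this (the extra device names here are made up; use your own):

MARK_STUDY_REPORTED_IF_MATCHING_SOURCES      install    us13 us14 obgyn01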

Switch modality on incoming studies

Sometimes a modality sends studies as a type you don't want. For instance, our DEXA machines send as type OT, and we have a mix of CR and DX modalities that we want to display as a single type (we went with the lowest common denominator and are converting them all to CR). Ideally, the sending device has some way to change the type it sends, but not always. Thankfully, PACS has a way to help. All you need to do is place the following in your dcmimport.site file.

OVERRIDE_INCOMING_MODALITY_FOR_SOURCES	INSTALL	dexa01 dr04

dexa01.OT	INSTALL		BD
dr04.DX		INSTALL		CR

If you catch this within your first few studies, it is pretty easy to fix the ones that have already been sent. If a device has been sending for a few years and nobody ever thought to fix the problem, you have some cleanup to do. I'll cover that later.

Add new archive space

While we technically use tape to archive for disaster recovery, the PACS system doesn't really know about that; it only knows about our NAS archive. This is fine with me; I would dread adding tape management to my list of jobs. However, we still run into the occasional snafu, like when we ran out of archive space. We really have plenty of space, but we had broken our NAS into 4-terabyte chunks to make it easier to manage, and our use had sort of gotten away from me (we bought a breast tomosynthesis machine, and oh boy does it suck space like no tomorrow). Anyway, our rads were complaining that things were slow, so I looked at the normal culprits and found nothing. It wasn't until I ran dbdump -a and saw 1700 unarchived bags that I keyed into a possible archive problem. I then looked at arc10dev.log (I probably could have looked at arc10.log or archive.log, but this was the most recently written log when I sorted, so I opened it first) and found what I needed to see:

[ALIDeviceDrive.m,286 14:17:00.614] Will write to physical location '\\pacs-archive-02\archive6'
[alisam.c,367 14:17:00.614] alisam_write_archive_dir (dest=2012\0920\1348175820, vol=\\pacs-archive-02\archive6, src=\\nserv01\f\img\w1196988) called.
ERROR AT [alisam.c,434] ON Thu Sep 20 2012 14:17:06 IN 'ALISAMKit' INFORM 'log' TITLE 'alisam_write_archive_dir()': Free space (80 KB) on partition '\\pacs-archive-02\archive6' is not enough for source '\\nserv01\f\img\w1196988' with size of 164017628 bytes.
ERROR AT [ALIDeviceDrive.m,305] ON Thu Sep 20 2012 14:17:06 IN 'aliardevicedaf' INFORM 'log' TITLE 'ERR': Error encountered in sam_write_archive_dir() rc: -5993
[ALIArchiveInfo.m,427 14:17:06.395] Found partition name 'online_arc6' contained by volume assigned to slot #0.
[ALIDeviceDrive.m,312 14:17:06.395] MediaInfo returned 
		Partition:online_arc6
		Total Space:0
		Free Space:0
		Media Status:GOOD
		Date Full:0
[ALIDriveClientStub.m,176 14:17:06.708] ALIJukeboxClientStub: TO 'nserv2:arc10': SENT noticeOfWriteFailureFromClient:'NSERV2:arc10dev' 
	mediaInfo:
		Partition:online_arc6
		Total Space:0
		Free Space:0
		Media Status:GOOD
		Date Full:0 
	errResult:<-5993:Failed archiving bag> 
	withJobTrace:
[ALIJobTrace.m,706 14:17:06.708] JobTrace #44596 originated by nserv2:migrate 

Note that I put more in here than was necessary, mostly because that “Media Status: GOOD” line strikes me as very deceptive at a quick glance, so I wanted to include it.

I then logged on to the NAS and saw we had indeed run out of space on the drive.

Now, for the fix:

First, make sure you have enabled sharing on the disk you are adding, and have appropriately assigned rights to the MIG Users Group and the MIG Admins Group.

Then, you have a few site files to edit. The following are my edits/additions to the site files. Yours will be different based on your existing number of archives, where your files are being stored, etc. I am adding Archive Unit 11 (AU11), which is the fifth segment of on-line archive 3, with the files being stored on the seventh NAS partition. Clear? No? I’m not surprised. You will probably want to look at your .site files and try to correlate. It should come together with a few minutes of perusal.

Add to archive.site:

LIST_OF_ARCHIVE_UNITS       install  AU1 AU2 AU6 AU7 AU8 AU9 AU10 AU11

AU11.description             install  "ON-LINE ARCHIVE"
AU11.short_name              install  "n/a"
AU11.long_name               install  "n/a"
AU11.brand_name              install  "n/a"
AU11.icon_lo                 install  default_device_lo.ico
AU11.icon_hi                 install  default_device_hi.ico
AU11.process_name            install  nserv2:arc11
AU11.read_only               install  NO
AU11.storage_class           install  online3
AU11.executable              install  aliarcontroller.exe
AU11.daily_mb_use            install  100

While you are in here, you may want to set the now-full archive's read_only setting to YES. Looking at it now, I should probably also edit the daily_mb_use setting, as 100 is not even close to right these days.
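
That read_only change is just one more line in archive.site, something like the following, assuming AU10 is the unit that filled up (check your own list and logs to confirm which one it actually was):

AU10.read_only               install  YES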

Add to arcontroller.site:

nserv2:arc11.num_drives             install  0
nserv2:arc11.unit_name              install  ON-LINE3_5
nserv2:arc11.jukebox_executable     install  aliardevicedaf.exe
nserv2:arc11.drive_executable       install  aliardevicedaf.exe
nserv2:arc11.low_megabytes_mark     install  0
nserv2:arc11.log_backups            install  25
nserv2:arc11.timeout_refresh        install  YES

Add to ardevice.site:

DAF_SHARED_LOCATIONS    	INSTALL	online_arc1 online_arc2 online_arc3 online_arc4 online_arc5 online_arc6 online_arc7

online_arc7			INSTALL \\pacs-archive-02\archive7

nserv2:arc11dev.DAF_DEVICE_LOCATIONS		INSTALL online_arc7
nserv2:arc11dev.TAR_BEFORE_ARCHIVE		INSTALL NO
nserv2:arc11dev.USE_TEMP_DIR_WHEN_TARRING	INSTALL NO
nserv2:arc11dev.USE_TEMP_DIR_WHEN_UNTARRING	INSTALL NO
nserv2:arc11dev.CHECK_FREE_SPACE		INSTALL YES

Lastly, you will need to bounce the archive and migrate services (if archiving has been backed up for a while, it might not be a bad idea to bounce the whole server).

It still takes a while to get everything back up to speed, as you need to catch up with archiving from whenever you ran out of space, but at least you are on the mend.

PowerScribe 360 and McKesson Integration

I first need to mention that if you have never had the Report Connector API (it is also documented as an SDK, though I think it is more properly an API) then you will need to talk to McKesson about purchasing and getting this installed. If you keep track of the EXPs and PTFs that are installed on your systems, the Report Connector is EXP-HRSPlugin11-646. It will need to be installed on any systems that integrate HRS-A and PowerScribe 360 (see the post How to install an EXP or PTF if you need help). Also, our clinic is pretty straightforward in how we are configured; we have one site/context, and don’t need any extra data to come over from PACS for our reports. If you have additional needs, you may want to just have McKesson do this for you.

We have a few decisions to make before going ahead:

  1. Where are your XML files going to be dropped?
    1. Originally, I had PowerScribe set up to use a new folder at C:\Nuance, but we already had Digisonics dropping into C:\XMLIntegrations, so I thought “Why not reuse, reduce, recycle?” and decided to put two subfolders into C:\XMLIntegrations, one “Nuance” and the other “Digisync,” and drop the XMLs into the respective folder for each integration. As we move slowly toward version 12, and might need to integrate other third-party apps, I’ll probably be happy that my C:\ root is not filled with a boatload of miscellaneous XML drop folders.
  2. What am I going to call this integration?
    1. I went with the classic “PowerScribe,” but you may want to use “PS360,” “Dictaphone,” or some other lingo that suits your site. Just replace as needed.

First, we will cover what needs to happen in PowerScribe:

  1. Set up the PACS system
    1. Go to the Setup>Sites page.
    2. Under the PACS option, add the “Horizon Med Imaging” type and give it a name and description.
    3. Choose “Slave” mode and put in C:\XMLIntegrations\Nuance;multisite=all
    4. Apply the changes in the PACS section and Save Changes on the Sites window.
    5. Take note of the Site name(s), as we will be doing some translation on these later.
  2. Under Setup>System>Preference>Security, make sure that “Allow null password via automation” is enabled. It doesn’t have to be, but it will save your rads from logging in each time the application is launched and may save some calls if they forget.
    1. Note that the system will autolaunch PS360 when you start HRS-A, but it does not log in at this startup. It sends the username and logs in the first time you click to dictate a report. This creates a slight delay for the first report creation, but the system loads faster, so it is sort of a wash. It is a win for anyone logging into the system and not planning to report (or not having a reporting ID), as there is no error or extra wait time for login.

Now, for the McKesson side:

  1. Update the OAF.site file
    1. Add PowerScribe to your ListOfApps setting
    2. You will need to generate three UIDs, which I described in an earlier post (here)
    3. Here is what we ended up with:
    4. PowerScribe.Launch.Adapter install "McKSdkXmlInterfaceOAFAdapter.McKSdkXmlInterfaceAdapter"
      PowerScribe.Launch.AllowedSessionTypes install ALL
      PowerScribe.Launch.Arguments install PowerScribe
      PowerScribe.Launch.SupportedModalities install ALL
      PowerScribe.Launch.Activities install ANY
      PowerScribe.Launch.UniqueID install "{8B8F51CF-3E9C-4FB3-92EB-A0BA37919DA1}"
      PowerScribe.Description.IconFilePrefix install "$ALI_SYS_DATA_PATH\OAF\CommonFiles\Icons\SDK-Dictate"
      PowerScribe.Description.IconLabel install "PowerScribe"
      PowerScribe.Description.Tooltip install "PowerScribe"
      PowerScribe.Description.MenuLabel install "PowerScribe"
      PowerScribe.AutoLaunch install Yes
      
      PowerScribe.Shortcuts install Anchor Active
      PowerScribe.Shortcuts.Anchor.FunctionId install "{A242D41E-F22F-4645-A262-A323ED957A70}"
      PowerScribe.Shortcuts.Anchor.Category install Workflow
      PowerScribe.Shortcuts.Anchor.Name install "Dictate on anchor study with PowerScribe"
      PowerScribe.Shortcuts.Anchor.Description install "Dictate on anchor study with PowerScribe"
      PowerScribe.Shortcuts.Active.FunctionId install "{B4F38AD3-47DE-4B50-8F13-7A104B5622A8}"
      PowerScribe.Shortcuts.Active.Category install Workflow
      PowerScribe.Shortcuts.Active.Name install "Dictate on active study with PowerScribe"
      PowerScribe.Shortcuts.Active.Description install "Dictate on active study with PowerScribe"
      
  2. Copy SDKXMLFileIntegration.base to PowerScribe.base
    1. This file contains all of the settings you might need, and helpful instruction regarding their use
  3. Create PowerScribe.site
    1. I did this by copying the one from our previous Digisonics integration, Digisync.site, and renaming it to PowerScribe.site
    2. The Digisync integration had a bunch of SDXS import fields that were unnecessary for this integration because PS360 has orders coming into an HL7 interface, the same as PACS, whereas Digisonics is relying on order information from PACS or the ultrasound to populate its reports. I deleted all of those fields to clean up the file. I also needed to add in the Response file settings, as this was a bidirectional link instead of the unidirectional used by Digisonics.
    3. The ExePath variable will not be the same as the shortcut on your desktop. The integration needs a .bat or .exe file to launch, while the shortcut points to an application manifest file. Nuance includes a .bat file in the same folder as the manifest just for this reason.
    4. The most confusing part of the link is the MapOfAssigningAuthorities setting. This links the Context names in PACS to the Site names in PS360. For us, the 360 site name is IMAGING, while the PACS context was left at DEFAULT. Presumably we could change the default PACS context to be called IMAGING, but I was hoping not to break too much in one fell swoop. If you are in a multiple RIS or multiple PACS environment, you may need more than one mapping, or you may have already matched the correct PS360 Site names with the context names, having come across this problem with other integrations in the past.
    5. The DictateOnAnchorUniqueID and DictateOnActiveUniqueID are matched to the corresponding lines in OAF.site.
    6. Here is what we ended up with:
    7. ReportingProductName Install "Nuance PowerScribe 360"
      ReportingProductVersion Install "1.1"
      ReportingVendorName Install "Nuance Inc."
      PreStartReportingApp Install "Yes"
      ExePath Install "\\rad-ps-sql-01\PowerScribe360Publish\PowerScribe360.bat"
      AllowMultipleAccession Install "No"
      SupportContextSwitch Install "No"
      UniqueTagForAssociatedStudies Install "No"
      EncryptionAlgorithm Install "None"
      CommunicationDirection Install "BI"
      TriggerPath Install "C:\XMLIntegrations\Nuance"
      TriggerFileName Install "Trigger.xml"
      ResponsePath Install "C:\XMLIntegrations\Nuance"
      ResponseFileName Install "Response.xml"
      MapOfAssigningAuthorities Install A1
      A1.Code Install "DEFAULT"
      A1.ExternalID Install "IMAGING"
      CloseWindowCaptions install "Nuance PowerScribe 360"
      DictateOnAnchorUniqueId Install "{A242D41E-F22F-4645-A262-A323ED957A70}"
      DictateOnActiveUniqueId Install "{B4F38AD3-47DE-4B50-8F13-7A104B5622A8}"
      
    8. Create folders for XML drop
      1. These are the folders defined by the TriggerPath variable above, and referenced in the PACS slave settings in the PS360 Admin Portal (a quick way to create them is shown below)
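
Nothing fancy is required to create the drop folder; Explorer works fine, or a quick PowerShell one-liner. The path is just the TriggerPath/ResponsePath value from PowerScribe.site:

New-Item -ItemType Directory -Force -Path 'C:\XMLIntegrations\Nuance'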

For each user:

  1. Preferences>Main Tool Bar will now contain a new option called “PowerScribe” that they will need to move from Available to Display and sort appropriately

Troubleshooting

  1. The following were very helpful in figuring out problems along the way:
    1. C:\ali\site\log\MckSdkReportingIntegration.log
    2. C:\Program Files\Nuance\PowerScribe360 Integration Component\XMLGen.exe
      1. XML file creator that lets you see the contents of files that are going to be dropped.
      2. This folder also contains TestRadWhereAx.exe and TestRadWhereCOM.exe that can show you additional bits of info that might be put into your files.
  2. Here is a sample Response.xml file:
  3. <FinishedReport>
         <UserID>username</UserID>
         <ContractVersion>2.0</ContractVersion>
         <AssigningAuthority>IMAGING</AssigningAuthority>
         <AccessionAnchor>15803334</AccessionAnchor>
         <ResponseStatus>Reported</ResponseStatus>
    </FinishedReport>
    
    1. Note the <AssigningAuthority> field. If you sign a report in PowerScribe and it does not close the report in PACS, it may be because the MapOfAssigningAuthorities setting in PowerScribe.site is not set properly. Look at the Ordering System page in the PACS Admin Utility to see how you need to map these fields. There will be an error regarding Contexts in MckSdkReportingIntegration.log.
  4. Here is a sample Trigger.xml file
  5. <NewReport>
         <UserID>username</UserID>
         <AccessionAnchor>123456</AccessionAnchor>
    </NewReport>
    
    1. As long as you don’t need to send any special fields, this is enough to log you in and create a new report for that accession number.