HANDS-FREE SPEECH-BASED NATURAL LANGUAGE PROCESSING COMPUTERIZED CLINICAL DECISION SUPPORT SYSTEM DESIGNED FOR VETERINARY PROFESSIONALS
BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates to the use of a hands-free, speech-based, computerized natural language processing system to provide veterinary professionals clinical decision support in a clinical environment.

Description of the Background

In various occupations, hands-free Clinical Decision Support ("CDS") offers a practical means for a single person to keep their hands focused on a specific task or activity while simultaneously accessing the information needed to complete that task or activity, or to make decisions regarding it. As the processing power available to devices and associated support services continues to increase, it has become practical to interact with users in new ways. In particular, it has become practical to interact with users through two-way speech dialogs, in which a user instructs a system by voice and the system responds by speech. It is now practical to develop and deploy computer software that interfaces with these voice-based systems in a way specifically designed to elicit from the user the information required to drive algorithms providing accurate, real-time clinical decision support.

SUMMARY OF THE INVENTION

A hands-free speech-based natural language processing clinical decision support system (knowledge and patient-specific data and recommendations intelligently filtered to improve patient care and medical outcomes) is configured to operate in conjunction with a stationary or mobile base device speech system to receive voice commands from a user. A user may direct speech to the base device; to do so, the user first speaks a keyword. A dialog may be conducted with the user in multiple turns, where each turn comprises user speech and a computer-generated audio speech response by the speech system. In addition, or as an alternative, the system response may be rendered as text on a display for the user to view. The user speech in any given dialog turn may be provided to the base device, and the system response speech in any given dialog turn may be provided from the base device.

The speech system dialog model is directed by a computer program finite state engine. Rule data for the state engine is retrieved from an Internet cloud database by a computer program function and is applied to the speech dialog system in order to prompt the user for specific information. The user-supplied information serves as input to the computer program, which in turn refines the speech dialog and its requests for additional information. Once all information required to make the CDS recommendation has been received, the computer program applies an algorithm to generate a set of recommendations from which the user can select the best option for their patient.

During the speech dialog, each user interaction is stored in two separate cloud databases. The first database stores all user responses as the user progresses through the speech dialog; the user may start and stop a dialog at any time, returning to the point of departure upon return. The second database stores user activity and decision-process information used to provide data for reporting and analytics. User activity may include no response over a given period of time, requests to edit previous responses, and requests to exit the system prior to completion. The activity tracking records the time of the activity in addition to the activity itself.
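By way of illustration, the rule-driven dialog engine and the two session databases described above might be sketched as follows. This is a minimal sketch, not the actual implementation: the state names, prompts, and slot names are hypothetical, and simple dictionaries stand in for the cloud databases.

```python
import time

# Illustrative rule data, standing in for rule records retrieved from
# the Internet cloud database. State names, prompts, and slots are
# hypothetical.
RULES = {
    "AGE_STATE": {
        "prompt": "Is this patient between thirteen weeks and seven years old?",
        "slot": "age_group",
        "next": "WEIGHT_STATE",
    },
    "WEIGHT_STATE": {
        "prompt": "How much does this patient weigh?",
        "slot": "weight",
        "next": "FINAL_STATE",
    },
}

class DialogEngine:
    """Drives a multi-turn dialog one state at a time from loaded rules."""

    def __init__(self, rules, session_db, tracking_db):
        self.rules = rules
        self.session_db = session_db    # first database: responses and resume bookmark
        self.tracking_db = tracking_db  # second database: timestamped user activity

    def turn(self, session_id, state, user_response):
        rule = self.rules[state]
        # Store the response and a bookmark so an interrupted dialog can
        # resume at the point of departure.
        session = self.session_db.setdefault(session_id, {})
        session[rule["slot"]] = user_response
        session["bookmark"] = rule["next"]
        # Record the activity and its time for reporting and analytics.
        self.tracking_db.setdefault(session_id, []).append(
            {"state": state, "response": user_response, "time": time.time()})
        next_rule = self.rules.get(rule["next"])
        return next_rule["prompt"] if next_rule else "All information collected."

engine = DialogEngine(RULES, session_db={}, tracking_db={})
print(engine.turn("session-1", "AGE_STATE", "no"))  # prompts for weight next
```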
BRIEF DESCRIPTION OF THE DRAWINGS

Appendix I shows the dialog model for mapping input speech to functional intent. Appendix II shows a sample speech request to be processed through the speech service and dialog model.

DETAILED DESCRIPTION

A speech-based system may be configured to interact with a user through speech to receive instructions from the user and to provide information services for the user. The system may have a stationary or mobile base device with a microphone for producing audio containing user speech. The user may give instructions to the system by directing speech to the base device. Audio signals produced by the base device are provided to a speech service for automatic speech recognition (ASR) and natural language understanding (NLU) to determine and act upon user intents (e.g., instructions, responses, questions). The speech service is a combination of networked and non-networked computer programs, running on a base hardware device and on an Internet-distributed computer server, configured to respond to user speech by sending data to custom computer program functions.

In order to fully determine a user's intent when speaking, the system may engage in a speech dialog with the user. A dialog comprises a sequence of dialog turns, where each turn comprises a user utterance and may also include a system-generated audio speech reply. The following is an example of a speech dialog that may take place between a speech-based system and a user:

Turn 1:
User: "Edit age."
System: "Is this patient between thirteen weeks and seven years old?"

Turn 2:
User: "No."
System: "Is the patient less than thirteen weeks old?"

Turn 3:
User: "No."
System: "This patient is greater than seven years old. How much does this patient weigh?"

A speech dialog may comprise any number of turns, each of which may use speech input collected from either the base device or the handheld device, with the corresponding response speech output deployed through the base device or the handheld device.

The base device 100 comprises a network-based or network-accessible speech interface device having one or more microphones, a speaker, and a network interface or other communications interface. The base device 100 is designed to be stationary and to operate from a fixed location, such as placed on a stationary surface. The base device 100 may have omnidirectional microphone coverage and may be configured to produce an audio signal in response to a user utterance of a keyword.

The speech base device 100 includes a speech service 102 that receives real-time audio or speech information processed by the speech base device 100 in order to recognize user speech, to determine the meanings and intents of the speech, and to interface with the computer finite state engine in fulfillment of those meanings and intents. The speech service 102 also generates and provides speech for output by the base device 100. The speech service 102 is part of a network-accessible computing platform that is maintained and accessible via the Internet. Network-accessible computing platforms such as this may be referred to using terms such as "on-demand computing", "software as a service (SaaS)", "platform computing", "network-accessible platform", "cloud services", "data centers", and so forth. Communications between the base device 100 and the service 102 may be implemented through various types of data communications networks, including local-area networks, wide-area networks, and/or the public Internet. Cellular and/or other wireless data communications technologies may also be used.
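The "Edit age" exchange above partitions patients into three age groups through successive yes/no answers. A minimal sketch of that branching follows; the group labels are hypothetical, and only the week/year thresholds come from the sample dialog.

```python
def resolve_age_group(between_13_weeks_and_7_years: bool,
                      under_13_weeks: bool) -> str:
    """Reduce the two yes/no answers from the sample dialog to an age group."""
    if between_13_weeks_and_7_years:
        return "adult"      # thirteen weeks to seven years old
    if under_13_weeks:
        return "pediatric"  # less than thirteen weeks old
    return "geriatric"      # greater than seven years old

# Turns 1-3 above answered "No" twice, so the patient falls in the oldest
# group and the dialog advances to the weight question.
assert resolve_age_group(False, False) == "geriatric"
```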
The speech service 102 may serve a large number of base devices and associated handheld devices, which may be located on the premises of many different users.

The CDS collects speech input from the networked speech device 203/100. The CDS runs the speech input through a speech service 204 and applies a dialog model 205 (APPENDIX I) to determine the action the user intended to implement. The dialog model of APPENDIX I is used by the speech service 204 to map the user input speech to specific intents, either by direct word value or by association with a listed synonym of the expected word value. The dialog model 205/600 then sends the input speech data to the finite state engine in a formatted response (APPENDIX II), where it is processed.
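The synonym mapping performed by the dialog model might be sketched as follows, mirroring the entity resolution shown in APPENDIX II, where the spoken value "husky" resolves to the category "canine northern breed". The synonym table itself is hypothetical; only the husky and mastiff entries echo values that appear in APPENDIX II.

```python
# Hypothetical synonym table mapping spoken breed values to
# (category name, category id) pairs.
BREED_SYNONYMS = {
    "husky": ("canine northern breed", "canine_northern_breed"),
    "mastiff": ("canine giant breed", "canine_giant_breed"),
}

def resolve_breed(spoken_value: str) -> dict:
    """Resolve spoken input to a breed category by direct word value or
    by a listed synonym, as the dialog model does."""
    name, breed_id = BREED_SYNONYMS.get(spoken_value.lower(),
                                        (spoken_value, None))
    return {"value": spoken_value, "name": name, "id": breed_id}

resolve_breed("husky")
# -> {'value': 'husky', 'name': 'canine northern breed',
#     'id': 'canine_northern_breed'}
```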
While the claimed invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the claimed invention can be modified to incorporate any number of variations, alterations, substitutions, or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the claimed invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the claimed invention is not to be seen as limited by the foregoing description.

ABSTRACT

A hands-free speech-based natural language processing clinical decision support (CDS) system configured to operate in conjunction with a stationary or mobile base device speech system to receive voice commands from a user. A dialog may be conducted with the user in multiple turns, where each turn comprises user speech and a speech response by the speech system. The user speech in any given dialog turn may be provided to the base device. This speech system dialog is directed by a computer program finite state engine. Rule data for the state engine is retrieved from an Internet cloud database by a computer program function and is applied to the speech dialog system in order to prompt the user for specific information. The user-supplied information is used as input to the computer program, which in turn refines the speech dialog and its requests for additional information. Once all information required to make the CDS recommendation has been received, the computer program applies an algorithm to generate a set of recommendations from which the user can select the best option for their patient.

What is claimed is:

1. A hands-free speech-based natural language processing clinical decision support (CDS) system for use by a veterinary professional during a veterinary procedure, while the user's hands are occupied or unable to access patient or pharmaceutical data without stopping the procedure, comprising:
a stationary device or mobile device having a microphone and a speaker or earphone, connected by wire or wirelessly;
a natural language processing server, programmed with a computer software dialog model to interpret the raw voice data, connected to said stationary device or mobile device via a communications network;
a patient information database with patient information, connected to said stationary device or mobile device via a communications network;
a database with formulary rules to define protocols, connected to a remote hosted computer application server via a communications network;
a patient session database to store patient session attributes while determining protocols, connected to a remote hosted computer application server via a communications network;
a tracking database to track user actions and decisions, connected to a remote hosted computer application server via a communications network;
an analytics system for analyzing user actions and decisions for audit, training, and legal purposes, connected to a remote hosted computer application server via a communications network;
an interface for network interconnectivity to hospital information systems (HIS);
an interface for network interconnectivity to pharmaceutical inventory and purchasing, to ensure availability of protocols before recommendation and to trigger ordering of pharmaceuticals when inventory runs low;
a logic state rules engine, functioning in parallel to the aforementioned dialog model, to facilitate diagnosis and to filter protocols based on patient condition and attributes through a sequential process of computer-generated questions deployed through the aforementioned audio device(s) and user responses, including the ability to request that the system repeat a request or edit a previously entered patient attribute; and
a networked application for document generation of a digital and/or print report for long-term data retention.
2. The system of
3. The system of
4. The system of
5. The system of
6. The system of
7. The system of
8. The system of
9. The system of
10. The system of
11. The system of
12. The system of
13. The system of
14. The system of
15. The system of
16. The system of
17. The system of
18. The system of
19. The system of
20. The system of
21. The system of
22. The system of
23. The system of
24. The system of
25. The system of
26. The system of
27. The system of
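As one illustration of the protocol filtering performed by the logic state rules engine recited in claim 1, formulary rules might be matched against collected patient attributes roughly as follows. This is a minimal sketch: the rule fields and thresholds are hypothetical, and only the attribute names and values follow the session shown in APPENDIX II.

```python
# Hypothetical formulary rules; field names and thresholds are illustrative.
PROTOCOLS = [
    {"analgesic": "pure mu opioid", "max_asa": 3, "min_pain": 4},
    {"analgesic": "NSAID", "max_asa": 2, "min_pain": 1},
]

def filter_protocols(protocols: list, patient: dict) -> list:
    """Return only the protocols compatible with the patient's condition."""
    return [p for p in protocols
            if patient["ASA"] <= p["max_asa"]
            and patient["anticipated_pain"] >= p["min_pain"]]

# Attribute values as in the APPENDIX II session: ASA 2, anticipated pain 5.
patient = {"ASA": 2, "anticipated_pain": 5}
filter_protocols(PROTOCOLS, patient)  # both protocols remain as candidates
```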
APPENDIX II

{
  "version": "1.0",
  "session": {
    "new": false,
    "sessionId": "amzn1.echo-api.session.##########################################",
    "application": {
      "applicationId": "amzn1.ask.skill.##########################################"
    },
    "attributes": {
      "body_condition": 5,
      "breed_category": "canine giant breed",
      "editmode": true,
      "analgesic": "a pure mu opioid, such as, Hydromorphone 0.1 milligram per kilogram, intra muscular or intravenous",
      "age_group": 0,
      "STATE": "BREED_STATE",
      "premedication_note": null,
      "breed": "mastiff",
      "continueflag": false,
      "ASA": 2,
      "anticipated_pain": 5,
      "bookmark": "BREED_STATE",
      "editreturnpoint": "FINAL_STATE",
      "anxiety": 4,
      "abnormal": "no",
      "dosingMultiplier": 1,
      "species": "dog",
      "healthy": "yes",
      "breed_id": "canine_giant_breed",
      "current_pain": 3,
      "inducing": "Ketamine 1 milligram per kilogram, followed by Propofol, up to 4 milligram per kilogram, titrated to effect",
      "prescribe": true,
      "premedication": null,
      "brachycephalic": true
    },
    "user": {
      "userId": "amzn1.ask.account.##########################################"
    }
  },
  "context": {
    "AudioPlayer": { "playerActivity": "IDLE" },
    "Display": { "token": "" },
    "System": {
      "application": {
        "applicationId": "amzn1.ask.skill.##########################################"
      },
      "user": {
        "userId": "amzn1.ask.account.##########################################"
      },
      "device": {
        "deviceId": "amzn1.ask.device.##########################################",
        "supportedInterfaces": {
          "AudioPlayer": {},
          "Display": {
            "templateVersion": "1.0",
            "markupVersion": "1.0"
          }
        }
      },
      "apiEndpoint": "https://api.amazonalexa.com",
      "apiAccessToken": "##########################################"
    }
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.##########################################",
    "timestamp": "2018-03-11T22:35:53Z",
    "locale": "en-US",
    "intent": {
      "name": "BreedIntent",
      "confirmationStatus": "NONE",
      "slots": {
        "species": { "name": "species", "confirmationStatus": "NONE" },
        "article": { "name": "article", "confirmationStatus": "NONE" },
        "breed": {
          "name": "breed",
          "value": "husky",
          "resolutions": {
            "resolutionsPerAuthority": [
              {
                "authority": "amzn1.er-authority.echo-sdk.amzn1.ask.skill.##########################################.AMAZON.Animal",
                "status": { "code": "ER_SUCCESS_MATCH" },
                "values": [
                  {
                    "value": {
                      "name": "canine northern breed",
                      "id": "canine_northern_breed"
                    }
                  }
                ]
              }
            ]
          },
          "confirmationStatus": "NONE"
        }
      }
    }
  }
}

End of APPENDIX II
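For illustration, a handler receiving the request above might read the resolved breed category out of the intent slot as follows. This is a minimal sketch: the function name is hypothetical, and request_json stands for the raw JSON of APPENDIX II.

```python
import json

def extract_resolved_breed(request_json: str):
    """Return the resolved breed category from an IntentRequest like the
    one in APPENDIX II, or None if entity resolution found no match."""
    event = json.loads(request_json)
    slot = event["request"]["intent"]["slots"]["breed"]
    authority = slot["resolutions"]["resolutionsPerAuthority"][0]
    if authority["status"]["code"] == "ER_SUCCESS_MATCH":
        # e.g. {"name": "canine northern breed", "id": "canine_northern_breed"}
        return authority["values"][0]["value"]
    return None
```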










