

Title:
METHOD AND SYSTEM FOR TROUBLESHOOTING NETWORK NODE FAULT
Document Type and Number:
WIPO Patent Application WO/2018/196953
Kind Code:
A1
Abstract:
Methods and network devices employed in troubleshooting of network nodes automatically update a workflow thereof, based on captured audio content.

Inventors:
KARAPANTELAKIS ATHANASIOS (SE)
VULGARAKIS FELJAN ANETA (SE)
KARLSSON ERLENDUR (SE)
Application Number:
PCT/EP2017/059754
Publication Date:
November 01, 2018
Filing Date:
April 25, 2017
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
G10L15/22; G06F11/36; G06Q10/00
Foreign References:
US20030078782A12003-04-24
US20130218783A12013-08-22
US9401993B12016-07-26
US7003085B12006-02-21
US20060197973A12006-09-07
US20070100782A12007-05-03
US20060233310A12006-10-19
Other References:
B. I. ELE ET AL.: "Development of an Intelligent Car Engine Fault Troubleshooting System (CEFTS)", WEST AFRICAN JOURNAL OF INDUSTRIAL & ACADEMIC RESEARCH, vol. 16, no. 1, December 2016 (2016-12-01)
Attorney, Agent or Firm:
ERICSSON (SE)
Claims:
WHAT IS CLAIMED IS:

1. A method (500) for troubleshooting a fault at a network node using a workflow, the method comprising:

capturing (S510) audio content while at least one step of the workflow is executed; and

causing (S520) an update of the workflow based on the captured audio content.

2. The method of claim 1, further comprising:

supplying spatiotemporal conditions information related to the fault for the update.

3. The method of claim 1 or 2, further comprising:

triggering Automated Speech Recognition, ASR, relative to the captured audio content to generate an ASR output; and

initiating text processing of the ASR output.

4. The method of claim 3, wherein a user login occurs prior to troubleshooting the fault, and the method further comprises:

establishing at least one user parameter during the user login, the at least one parameter being used for the ASR and/or the text processing.

5. The method of claim 4, wherein the at least one user parameter includes a user's language.

6. The method of claim 4 or 5, wherein the at least one user parameter includes a user's expertise level.

7. The method of any of claims 1 to 6, further comprising:

acquiring video content and/or images to be used for the update.

8. The method of claim 7, further comprising:

triggering pattern recognition relative to the video content and/or images.

9. The method of any of claims 1 to 8, further comprising:

submitting the update of the workflow to be stored.

10. A network node (600) comprising a transceiver (610, 620), a user interface (635) and a processor (630) configured

to control the user interface to capture audio content while at least one step of a workflow for troubleshooting a fault is executed; and

to cause an update of the workflow based on the captured audio content.

11. The network node of claim 10, wherein the processor is further configured to supply spatiotemporal conditions information related to the fault for the update.

12. The network node of claim 10 or 11, wherein the processor is further configured:

to trigger Automated Speech Recognition, ASR, relative to the captured audio content to generate an ASR output; and

to initiate text processing of the ASR output.

13. The network node of claim 12, wherein the processor is further configured to establish at least one user parameter during a user login performed via the user interface prior to troubleshooting the fault, the at least one parameter being used for the ASR and/or the text processing.

14. The network node of claim 13, wherein the at least one user parameter includes a user's language.

15. The network node of claim 13, wherein the at least one user parameter includes a user's expertise level.

16. The network node of any of claims 10 to 15, wherein the processor is further configured to control the user interface to acquire video content and/or images to be used for the update.

17. The network node of claim 16, wherein the processor is further configured to trigger pattern recognition relative to the video content and/or images.

18. The network node of any of claims 10 to 17, wherein the processor is further configured to submit the update of the workflow to be stored.

19. The network node of any of claims 10 to 18, further comprising a memory (640) configured to store executable codes which, when executed by the processor, make the processor at least control the user interface to capture the audio content.

20. A network communication system (700), comprising:

a network node (710) having a transceiver, a user interface and a network node processor configured to control the user interface to capture audio content while at least one step of a workflow for troubleshooting a fault is executed, and to cause an update of the workflow based on the captured audio content; and

a network device (720) having a network communication interface, a processor and a memory, wherein the network node processor controls the transceiver to communicate with the processor of the network device via the network communication interface so that the processor of the network device performs at least one of:

generate the update of the workflow based on knowledge obtained from the captured audio content;

store the update of the workflow in the memory; and

perform Automated Speech Recognition, ASR, relative to the captured audio content to generate an ASR output, and/or text processing of the ASR output.

21. A computer program comprising instructions which, when executed by at least one processor, carry out the method according to any of claims 1 to 9.

22. A network device (800) comprising:

a transceiver module (810) configured to enable communication with other devices in a network; and

a control module (820) configured to update a workflow designed for troubleshooting a fault, based on audio content captured at a network node where at least one step of the workflow is executed and received via the transceiver module.

Description:
METHOD AND SYSTEM FOR TROUBLESHOOTING NETWORK NODE FAULT

TECHNICAL FIELD

[0001] Embodiments of the subject matter disclosed herein generally relate to methods and network devices employed in troubleshooting of network nodes, automatically updating a workflow of the troubleshooting using audio analytics and natural-language processing assistance, speech to text, text to speech, and multimedia-aided information exchange.

BACKGROUND

[0002] When troubleshooting and recovering network nodes, local technical support personnel rely on fault resolution workflows (henceforth referred to simply as workflows). These workflows typically involve several steps, and some steps have conditional if-then-else scenarios, which branch the original workflow into different sequences of steps. Significant progress has been made relative to the workflows by at least partially automating them (see, e.g., U.S. Patent Application Publication Nos. 2007/0100782 and 2006/0233310). Such improvements include a broad range of tools, from simple automation scripts to intelligent systems that automatically troubleshoot and recover the system. An example of a simple fault resolution workflow run via a shell script is recovery of a website from a backup, the website being hosted on a network node running the Drupal Content Management System (see, e.g., Drupal's online documentation). An example of a more complex workflow using artificial intelligence is employed in troubleshooting car engine faults (see the article "Development of an Intelligent Car Engine Fault Troubleshooting System (CEFTS)," by B. I. Ele et al., published in the West African Journal of Industrial & Academic Research, Vol. 16, No. 1, December 2016).

[0003] Failure to achieve resolution of the fault may still arise in spite of partial or complete workflow automation if the operations beyond the workflow that ultimately resolve the fault are not sufficiently documented, are only superficially documented, or if no reference to the fault situation is logged at all.

[0004] For example, consider the following workflow related to overheating of a network device (radio dot, RD) signaled by an alarm monitored in an operation and maintenance interface. If the temperature is exceptionally high, then the workflow to mitigate the problem consists of the following steps:

1. Check that the cooling system is operating normally while waiting for the alarm to clear.

2. If the alarm remains, check the thermal operating conditions of the RD.

3. Check to see if the alarm has ceased.

4. If the alarm remains, check that the RD is installed as described in Install Radio Dot.

5. Check to see if the alarm has ceased.

6. If the alarm reoccurs, replace the RD as described in Replace Radio Dot.

7. Check to see if the alarm has ceased.

8. If the alarm remains, consult the next level of maintenance support.

[0005] Steps 1, 2, 4, 6 and 8 in this workflow require manual intervention, i.e., the steps must be manually initiated and their completion indicated by an engineer on-site, while steps 3, 5 and 7 can be automated (e.g., via scripts). The fault resolution process relies on the expertise of local support personnel, either via local conversation (e.g., with another support engineer on-site) or via conversation with remote so-called "second line" support, for progress toward resolution of the fault. Four major practical problems related to this approach have been identified.
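For illustration only, the overheating workflow of paragraph [0004] could be captured as structured data that marks each step as manual or automated, in the spirit of the operational-parameters format shown later in this description. The following Python sketch is a non-limiting example; the field names are illustrative assumptions and not part of any claimed interface.

# Illustrative sketch only: the radio-dot overheating workflow expressed as
# structured data, with each step marked as manual or automated. Field names
# are assumptions made for illustration.
RD_OVERHEATING_WORKFLOW = {
    "fault": "RD temperature exceptionally high",
    "steps": [
        {"step": 1, "type": "manual",    "description": "Check that the cooling system is operating normally while waiting for the alarm to clear."},
        {"step": 2, "type": "manual",    "description": "If the alarm remains, check the thermal operating conditions of the RD."},
        {"step": 3, "type": "automated", "description": "Check to see if the alarm has ceased."},
        {"step": 4, "type": "manual",    "description": "If the alarm remains, check that the RD is installed as described in Install Radio Dot."},
        {"step": 5, "type": "automated", "description": "Check to see if the alarm has ceased."},
        {"step": 6, "type": "manual",    "description": "If the alarm reoccurs, replace the RD as described in Replace Radio Dot."},
        {"step": 7, "type": "automated", "description": "Check to see if the alarm has ceased."},
        {"step": 8, "type": "manual",    "description": "If the alarm remains, consult the next level of maintenance support."},
    ],
}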

[0006] First, these conversations, which may contain important information with regard to details on how the fault is resolved, are rarely formally captured to their full extent in written form. The level of detail when documenting the fault resolution process in a troubleshooting report is up to the discretion of the engineers who resolved it.

[0007] Second, even if the resolution process is documented properly, the turnaround time for it to become available within the workflow is significant because the troubleshooting report has to be validated (read) by a different team of engineers and then integrated into the next version of the workflow.

[0008] Further, the fault resolution process may be related to certain spatiotemporal conditions, which may apply only to the specific geographical location and/or the specific time when the fault occurred.

[0009] Last, but not least, workflows are usually presented to the troubleshooter as a document with text and figures. In many instances, the troubleshooter would be much better served by a more interactive interface to the workflows that enables the troubleshooter to ask questions regarding the workflow and get answers through an audio interface. Alternately or additionally, it is desirable for the troubleshooter to obtain video instructions on the more challenging portions of the workflow.

[0010] In view of the above-identified drawbacks of conventional workflow usage, it has become apparent that improvements are desirable to better capture the experience that may be gained from fault resolution using a workflow.

SUMMARY

[0011] In order to more efficiently capture the knowledge gained while troubleshooting a network node fault using a workflow, in various embodiments, audio content is captured and processed to be promptly integrated into an updated workflow. Thus, the amount of knowledge related to fault resolutions is increased by capturing knowledge contained in expert conversations, which may be extracted using Automated Speech Recognition (ASR). Updated workflows are versions of workflows augmented with the captured knowledge, and may also include specific spatiotemporal conditions associated with a fault resolution. The updated workflows become immediately available for the workflow's subsequent users.

[0012] The embodiments improve the troubleshooting process of network nodes by speeding up the validation process of workflow steps, automatically capturing and adding knowledge (or even steps) to the workflow based on captured audio content, i.e., conversations of the engineers involved in a troubleshooting process, and, optionally, adding contextual information to the workflow, for example, specific spatiotemporal conditions, that may speed up resolution of later similar problems in the same context.

[0013] According to an embodiment, there is a method for troubleshooting a fault at a network node using a workflow. The method includes capturing audio content while at least one step of the workflow is executed, and causing an update of the workflow based on the captured audio content.

[0014] According to another embodiment there is a network node having a transceiver, a user interface and a processor. The processor is configured to control the user interface to capture audio content while at least one step of a workflow for troubleshooting a fault is executed, and to cause an update of the workflow based on the captured audio content.

[0015] According to another embodiment there is a network communication system including a network node and a network device. The network node has a transceiver, a user interface and a network node processor configured to control the user interface to capture audio content while at least one step of a workflow for troubleshooting a fault is executed, and to cause an update of the workflow based on the captured audio content. The network device has a network communication interface, a processor and a memory. The network node processor controls the transceiver to communicate with the processor of the network device via the network communication interface so that the processor of the network device (1) generates the update of the workflow based on knowledge obtained from the captured audio content, (2) stores the update of the workflow in the memory, and/or (3) performs Automated Speech Recognition, ASR, relative to the captured audio content to generate an ASR output, and/or text processing of the ASR output.

[0016] According to yet another embodiment, there is a network device having a transceiver module and a control module. The transceiver module is configured to enable communication with other devices in a network. The control module is configured to update a workflow designed for troubleshooting a fault, based on audio content captured at a network node where at least one step of the workflow is executed and received via the transceiver module.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate one or more embodiments and, together with the description, explain these embodiments. In the drawings:

[0018] Figure 1 is a sequence diagram of the bootstrapping process;

[0019] Figure 2 exemplarily illustrates a workflow;

[0020] Figure 3 is an example of a workflow description document;

[0021] Figure 4 illustrates a troubleshooting process according to an embodiment;

[0022] Figure 5 is a flowchart of a method according to an embodiment;

[0023] Figure 6 is a block diagram of a device according to an embodiment;

[0024] Figure 7 is an exemplary illustration of a cloud deployment according to an embodiment; and

[0025] Figure 8 is a schematic representation of a network device according to an embodiment.

DETAILED DESCRIPTION

[0026] The following description of the embodiments refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims. The embodiments are described in the context of a wireless network, but may also be applied in a cloud network and may employ devices that pertain to different communication networks.

[0027] Reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with an embodiment is included in at least one embodiment of the subject matter disclosed. Thus, the appearance of the phrases "in one embodiment" or "in an embodiment" in various places throughout the specification is not necessarily referring to the same embodiment. Further, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.

[0028] In various embodiments, audio content is captured at the troubleshooting site where a fault at a network node is resolved using a workflow. The workflow is then updated based on the captured audio content. To better explain the interactions and techniques in various scenarios, components and actors are described below. The system components are logical and can be implemented in the same or different physical nodes.

[0029] The term "engineers" refers to technical personnel who troubleshoot a faulty network node at its site. The network node exhibiting some type of fault that makes troubleshooting necessary has two interfaces: an operation and maintenance (O&M) interface, and a "Media In" interface.

[0030] The O&M interface allows engineers access to configure the node and includes user authentication functionality to check user credentials (such as username and password, for example). Additionally, this O&M interface may be configured to request a workflow step-by-step while advancing to fault resolution. The O&M interface may be an interactive audio-visual or augmented reality interface, for example, a digital assistant type of expert system that engages in dialogue with the engineer. Thus, engineers do not have to type commands into a terminal as in the case of a command-line interface.

[0031] The "Media In" interface is used for capturing audio (and potentially also video) on-site. This interface may include an analog-to-digital convertor, codecs (e.g., MPEG-3, as described, for example, in ISO/IEC 13818-3:1998, "Information technology - Generic coding of moving pictures and associated audio information - Part 3: Audio", or G.711, as described, for example, in ITU-T G.711, "Pulse code modulation (PCM) of voice frequencies", from General Aspects of Digital Transmission Systems - Terminal Equipments) and a protocol stack (e.g., Session Initiation Protocol, SIP, as described, for example, in Request for Comments 3261 of 2002, or H.323, as described, for example, in ITU-T H.323 of 2009, "Packet-based multimedia systems", from Series H: Audiovisual and Multimedia Systems - Infrastructure of audiovisual services - Systems and terminal equipment for audiovisual services), as well as a network interface to transmit the audio stream to the Workflow Processor Node. An example implementation is discussed later.
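As a non-limiting illustration of the "Media In" path described above, the following Python sketch forwards already-digitized audio (read from a WAV file) to the Workflow Processor Node. The endpoint URL, the header names and the use of a plain HTTP POST in place of a SIP or H.323 media session are simplifying assumptions.

# Illustrative sketch only: forward captured audio from the "Media In"
# interface to the Workflow Processor Node. A real deployment would use a
# codec and a signalling stack such as SIP or H.323; a plain HTTP POST and
# the endpoint URL are assumptions made here for brevity.
import wave
import requests  # third-party HTTP client, assumed available

MEDIA_SINK_URL = "http://workflow_manager.wfprocessor/media"  # hypothetical endpoint

def forward_captured_audio(wav_path: str, session_id: str) -> None:
    with wave.open(wav_path, "rb") as wav:
        pcm_frames = wav.readframes(wav.getnframes())
        sample_rate = wav.getframerate()
    # Ship raw PCM plus the parameters the media-processing module needs.
    requests.post(
        MEDIA_SINK_URL,
        data=pcm_frames,
        headers={
            "Content-Type": "audio/L16",
            "X-Sample-Rate": str(sample_rate),
            "X-Session-Id": session_id,
        },
        timeout=10,
    )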

[0032] A Workflow Processor Node serves a dual purpose: it hosts media- and text-processing functions, and it authorizes engineers and provides the workflows to the authorized engineers via the O&M interface. The Workflow Processor Node may convert captured audio to text, e.g., using Automated Speech Recognition (ASR), and may also extract text from images using optical image recognition. Text and image analysis may then be used to update the workflow or even generate a new workflow.

[0033] Engineers' credentials and other relevant metadata may be stored in a user directory, which may be a directory server such as Active Directory, accessed via the Lightweight Directory Access Protocol (LDAP). The user directory enables validation of user credentials (e.g., via the LDAP "bind" feature), and can also supply user metadata on request. Of particular interest for some embodiments is the nationality of the user, which suggests his/her mother tongue and other languages spoken, and can therefore be used as input to the media-processing function mentioned above to increase speech recognition accuracy.
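A minimal sketch of the user-directory interaction, assuming the Python ldap3 library and an illustrative directory layout; the directory host, the base DN and the "nationality" attribute name are assumptions and will differ between deployments.

# Illustrative sketch only: validate engineer credentials via LDAP "bind" and
# fetch metadata such as nationality. The host, base DN and "nationality"
# attribute are assumptions; real directory schemas will differ.
from ldap3 import Server, Connection, ALL

def authorize_and_fetch_nationality(username: str, password: str):
    server = Server("ldap://user-directory.example", get_info=ALL)  # hypothetical host
    user_dn = f"uid={username},ou=engineers,dc=example,dc=com"      # assumed layout
    conn = Connection(server, user=user_dn, password=password)
    if not conn.bind():                      # LDAP bind validates the credentials
        return None                          # authorization failed
    conn.search(user_dn, "(objectClass=*)", attributes=["nationality"])
    nationality = str(conn.entries[0].nationality) if conn.entries else None
    conn.unbind()
    return nationality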

[0034] A Workflow Store stores workflows described in formal language (such as Yet Another Workflow Language - YAWL, or Common Workflow Language - CWL). Spatiotemporal conditions information such as location of the network node(s) repaired by a workflow as well as when these nodes were repaired may be stored in association with the workflow.
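For illustration, a Workflow Store record could pair the workflow description with spatiotemporal metadata as in the sketch below; the record layout and the use of an append-only file in place of a real document store are assumptions made for brevity.

# Illustrative sketch only: a Workflow Store record pairing a workflow
# description (e.g., YAWL or CWL text) with spatiotemporal metadata. The
# field names are assumptions made for illustration.
import json
import time

def store_workflow(store_path: str, workflow_id: str, workflow_document: str,
                   node_location: str) -> None:
    record = {
        "workflow_id": workflow_id,
        "document": workflow_document,          # YAWL/CWL description as text
        "spatiotemporal": {
            "location": node_location,          # where the repaired node sits
            "repaired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        },
    }
    with open(store_path, "a") as store:        # append-only file as a stand-in store
        store.write(json.dumps(record) + "\n")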

[0035] When the engineers arrive at the network node that needs troubleshooting, they begin by executing a bootstrapping process for the troubleshooting session. This bootstrapping process involves two consecutive sub-processes, the login process and the get-workflow process. A sequence diagram of the bootstrapping process is illustrated in Figure 1.

[0036] The engineers initiate the login process by sending a login request to the network node's O&M interface (OM_interface) at S110. The login request may include user-supplied information, i.e., one or more of the engineers' credentials, the node's location and the time of the request. At S115, the network node 101 forwards the login request to the workflow processor that is physically located on another network device. In Figure 1, the workflow processor 102 is illustrated as executing a localization module (Localization_Module) and a workflow management module (WF_mgmt_Module). The localization module is employed in the login process, and the workflow management module in providing the workflow. These modules may be executed on the same hardware or on different hardware.

[0037] The localization module sends an authorization and user data retrieval request to the user directory at S120. The user directory authorizes the user if the user-supplied credentials match the corresponding information stored in the directory, for example, a <username, password> tuple matches one already present in the directory. Besides an indication that the user is authorized, the user directory may provide information regarding user nationality to the localization module at S125. The nationality may later be used to increase speech recognition accuracy. Session information is stored in the localization module at S130. The module sends an acknowledgement indicating successful login to the node's O&M interface at S135. The node's O&M interface provides a login complete indication to the engineers at S140.

[0038] After the login process ends successfully, the get-workflow process begins when the engineers request a workflow from the workflow management module at S150. The workflow request may include the time of the request, the type of fault and the location, as well as other potentially relevant information. The type of fault depends on the hardware and/or software being troubleshooted, and can be in any appropriate format, ranging from a form containing a few keywords such as "subscriber database corruption" or "memory overflow" to machine-readable descriptions together with error logs. The workflow request is sent from the O&M interface to the workflow management module at S155. An example of this request using the hypertext transfer protocol (HTTP) POST method is provided below.

POST /wfrequest HTTP/1.1
Host: workflow_manager.wfprocessor
User-Agent: OM_interface
Connection: keep-alive
Content-Type: application/json
Content-Length: 69

{"fault_format": "keywords", "fault_description": ["memory corruption"], "productID": "ProductX"}
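A minimal sketch of how the O&M interface could issue the workflow request shown above, using the Python requests library; the response schema assumed in the comment is illustrative only.

# Illustrative sketch only: the O&M interface issuing the workflow request
# shown above. The assumed response (a JSON body with the workflow ID and
# its first step) is an illustration, not a defined interface.
import requests

def request_workflow(fault_keywords, product_id):
    payload = {
        "fault_format": "keywords",
        "fault_description": list(fault_keywords),
        "productID": product_id,
    }
    response = requests.post(
        "http://workflow_manager.wfprocessor/wfrequest", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g., the workflow ID and its first step

# Usage: request_workflow(["memory corruption"], "ProductX")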

[0039] The workflow management module processes the received fault type at S160 and maps it to a workflow having a predefined workflow ID. The workflow management module then sends a retrieve-workflow request, which may include the workflow ID, to the Workflow Store at S165. In response, the workflow management module receives the workflow at S170, which it may temporarily store at S175, before forwarding it to the network node. The workflow may be sent from the workflow management module to the network node step-by-step, beginning with the first step at S180. The network node presents the received step to the engineers at S190.

[0040] As previously described, a workflow consists of a series of steps. The execution of a workflow is done in a stepwise manner (i.e., step-by-step) starting from an initial step, then proceeding through the consecutive steps. Every step is characterized by one or more operational parameters defining actions performed to complete the step. Some of the steps can be automated, for example, via automated shell scripts executed by the network node. Some other steps may need engineers' intervention, in which case, the workflow would only include a description of the actions the engineers are instructed to perform. An example of code specifying such operational parameters for a workflow step is provided below.

{
    "info": "Operational Parameters for Step 1",
    "stepNumber": 1,
    "OperationalParameters": [
        {
            "sequence": 0,
            "type": "manual",
            "description": "Check whether volatile memory is corrupt by performing basic memory tests"
        },
        {
            "sequence": 1,
            "type": "automated",
            "description": "Run contingency shell script for memory overflow",
            "uri": "nfs://scriptserver/contingencyMO.sir"
        }
    ]
}
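The following Python sketch illustrates how a network node might step through such operational parameters, presenting manual actions to the engineer and running automated ones. Executing a locally fetched script in place of the NFS-hosted one, and the local_script_path field, are simplifying assumptions for illustration.

# Illustrative sketch only: executing the operational parameters of one
# workflow step. Manual actions are shown to the engineer via the O&M
# interface (here, the console); automated actions run a script. The
# local_script_path field is a hypothetical stand-in for a script already
# fetched from its nfs:// URI.
import subprocess

def execute_step(operational_parameters):
    for action in sorted(operational_parameters, key=lambda a: a.get("sequence", 0)):
        if action["type"] == "manual":
            input(f"MANUAL: {action['description']} -- press Enter when done: ")
        elif action["type"] == "automated":
            subprocess.run(["/bin/sh", action["local_script_path"]], check=True)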

[0041] The engineers may also discuss the actions they are instructed to perform with the operational parameters while they are performing them. Such conversations are captured and, if deemed relevant, used to update the workflow. This updated workflow may be validated by the engineers, and then uploaded to the Workflow Store.

[0042] Figure 2 exemplarily illustrates a workflow presented to the engineers in visual form. Depending on the type and output of actions described in operational parameters, the engineers on-site may take a different sequence of steps (e.g., the branching in step 2 of Figure 2). Operational parameters (p1, p2, ...) represent the state of the system being troubleshooted at a given step. For example, in the situation of a temperature-related alarm, P2 = {currentTemp = 50 degrees Celsius; currentFanSpeed = 100 RPM}. The conditions represent diagnosis statements evaluated to determine the next step of the workflow. For example, C2.1 = {currentTemp >= 60 && currentFanSpeed <= 50} and C2.2 = {currentTemp < 60 && currentFanSpeed > 50}. If C2.1 is true, it means that the fans do not operate properly, not rotating fast enough. If C2.2 is true, it is likely that the thermostat is malfunctioning, as the fans are working too fast for the given temperature.
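A minimal sketch of evaluating the branching conditions C2.1 and C2.2 above to select the next step; the return values are illustrative labels rather than actual workflow step identifiers.

# Illustrative sketch only: evaluating the diagnosis conditions C2.1 and C2.2
# from the temperature-related example to choose the next workflow branch.
def next_step(current_temp: float, current_fan_speed: float) -> str:
    if current_temp >= 60 and current_fan_speed <= 50:
        return "fan-fault branch"         # C2.1: fans not rotating fast enough
    if current_temp < 60 and current_fan_speed > 50:
        return "thermostat-fault branch"  # C2.2: fans too fast for the temperature
    return "continue monitoring"          # neither condition holds

# With P2 = {currentTemp: 50, currentFanSpeed: 100}, next_step(50, 100)
# returns "thermostat-fault branch".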

[0043] The Workflow Store stores workflow description documents such as the one in Figure 3, together with workflow metadata. An exemplary set of records is shown below.

Record No. 1 - WF Document 1 (in text form), with troubleshooter feedback metadata:

"troubleshooter_feedback": [ {
    "troubleshooter1": {
        "name": "John Smith",
        "localization": { "location": "Sweden", "operator": "OperatorX" },
        "engineerCredentials": { "nationality": "Swedish" }
    }
} ]

Record No. 2 - WF Document 2 (in text form), fault description "subscription management subsystem, subscriber database, redundancy in subscriber data", "severity": "critical", with metadata:

{
    "product_name": "ProductY",
    "creation_time": YY-MM-DD,
    "last_update": YY-MM-DD,
    "verified_by": "Nick Cave"
}

[0044] In the above exemplary representation of workflow data, JavaScript Object Notation (JSON) is used for classifying the fault type in a hierarchical fashion, starting from the subsystem in the network node exhibiting the fault, moving to the type of fault and explicit observed behaviour. Other classification types may be used.

[0045] Mapping the fault type of the workflow request from the network node's O&M interface to a workflow can be done using lexicographical matching or another type of matching, e.g., semantic similarity search.
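For illustration, the lexicographical matching mentioned above could be as simple as the keyword-overlap sketch below; a real deployment might use semantic similarity search instead, and the catalog contents shown in the usage comment are assumptions.

# Illustrative sketch only: map the keywords of a workflow request to the
# stored workflow whose fault description shares the most terms. The catalog
# contents in the usage comment are illustrative assumptions.
def map_fault_to_workflow(fault_keywords, catalog):
    """catalog: {workflow_id: fault description string}."""
    request_terms = {w.lower() for kw in fault_keywords for w in kw.split()}
    best_id, best_overlap = None, 0
    for workflow_id, description in catalog.items():
        overlap = len(request_terms & {w.lower() for w in description.split()})
        if overlap > best_overlap:
            best_id, best_overlap = workflow_id, overlap
    return best_id

# Usage: map_fault_to_workflow(["memory corruption"],
#     {"WF-17": "volatile memory corruption", "WF-02": "subscriber database redundancy"})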

[0046] Figure 4 illustrates a troubleshooting process according to an exemplary embodiment. An audio-capturing interface (Media_IN) enables capturing conversations between engineers while troubleshooting the fault. This interface may be configured to also capture video and/or images.

[0047] The faulty network node 401 may be a Radio Base Station (RBS) in the Radio Access Network (RAN) part of a cellular network. Troubleshooting a remote RBS is a real issue because it may be hard to reach and/or difficult to troubleshoot due to its complexity. The functionality described for the workflow processor 402, the Workflow Store and the user directory can reside in one or more physical nodes of the cellular network.

[0048] The troubleshooting process includes a loop 400 over workflow steps and an inner loop 410 that occurs whenever audio data is captured. Loop 400 includes communications (S415, S425, S435, S445) between engineers, the O&M interface and the workflow management module, for providing a workflow step to the engineers.

[0049] In loop 410, discussion between engineers is captured as audio data by the network node's audio-capturing interface at S450. The network node's audio-capturing interface forwards the audio data to the media-processing module at S455. The media-processing module may retrieve user data, such as a user's nationality, from the localization module at S460, S465, before applying a speech recognition algorithm (aided by the user data) to the audio data to obtain conversation text at S470, which is then forwarded to the text-processing module at S475. The text-processing module converts the conversation text into workflow instructions at S480, and then sends the workflow instructions to the workflow management module. The workflow management module then updates the workflow at S490. The updated workflow may optionally be presented to experts for validation, as illustrated in box 495. The workflow management module sends the updated workflow to the Workflow Store at S497.
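A minimal sketch of the inner loop 410, assuming the media-processing and text-processing modules are available as callables; asr_transcribe and extract_instructions are hypothetical stand-ins, and the workflow is represented as a simple dictionary.

# Illustrative sketch only: the inner loop 410. asr_transcribe() and
# extract_instructions() are hypothetical stand-ins for the media-processing
# and text-processing modules; the workflow object is a plain dict.
def process_captured_audio(audio_frames, user_language, workflow,
                           asr_transcribe, extract_instructions):
    # S470: speech recognition, aided by the user's language from login data
    conversation_text = asr_transcribe(audio_frames, language_hint=user_language)
    # S480: convert conversation text into candidate workflow instructions
    new_instructions = extract_instructions(conversation_text)
    # S490: update the workflow with the extracted knowledge
    if new_instructions:
        workflow.setdefault("captured_knowledge", []).extend(new_instructions)
    return workflow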

[0050] Figure 5 is a flowchart of a method 500 for troubleshooting a fault at a network node using a workflow according to an embodiment. The method, which may be performed by the network node in Figure 4, includes capturing audio content while at least one step of the workflow is executed at S510, and causing an update of the workflow based on the captured audio content at S520. However, the method may be performed by another physical device the engineers brought on-site when the network node is incapacitated to the extent of not being able to perform as expected (i.e., execute the modules illustrated in Figures 1 and 4). The update may be generated on the same physical device that captures the audio content.

[0051] The method may include supplying spatiotemporal conditions information related to the fault for the update (as, for example, indicated in S115 of Figure 1). The method may further include triggering Automated Speech Recognition, ASR, relative to the captured audio content to generate an ASR output, and initiating text processing of the ASR output (see, e.g., S455 in Figure 4, which causes S460-S470).

[0052] In some embodiments, since a user login (see, e.g., S110-S140 in Figure 1) has occurred prior to troubleshooting the fault, the method further includes establishing one or more user parameters (e.g., nationality, user's language) during the user login, the parameter(s) being used for the ASR and/or the text processing. The parameter(s) may be retrieved (e.g., from User-Directory) or may be acquired via interactions with the engineers. Another parameter that may be acquired during login is expertise level, which may be a valuable indicator for the text processing extracting knowledge in the form of new workflow instructions from the ASR output. The expertise level may be predefined depending on education level, years of experience in the field, etc.
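For illustration, the user parameters established at login could be carried as a small profile object and handed to the ASR and text-processing steps, as sketched below; the field names and the weighting scheme are assumptions made for illustration.

# Illustrative sketch only: user parameters captured at login and later used
# to configure speech recognition and text processing. Field names and the
# weighting scheme are assumptions.
from dataclasses import dataclass

@dataclass
class UserProfile:
    username: str
    language: str          # e.g., derived from nationality in the user directory
    expertise_level: str   # e.g., "junior" or "senior"; predefined per policy

def asr_options(profile: UserProfile) -> dict:
    # Hand the language to the speech recognizer and let the text processor
    # weight extracted instructions by the speaker's expertise level.
    return {"language_hint": profile.language,
            "instruction_weight": 1.0 if profile.expertise_level == "senior" else 0.5}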

[0053] Besides acquiring audio content, the method may also include acquiring video content and/or images to be used for the update. The device executing the method may then trigger, or execute, pattern recognition relative to the acquired video content and/or images.

[0054] The method may then include submitting the update of the workflow to be stored (see, e.g., S497 in Figure 4).

[0055] Figure 6 is a block diagram of a device 600 configured to perform at least some of the above-described methods. Device 600, which is connected to network 612, includes a transceiver, represented in Figure 6 as a separate receiver 610 and transmitter 620, although it may be a single piece of hardware. Device 600 further includes at least one processor 630, a data storage unit 640 and a user interface 635. Processor 630 is configured to control the user interface 635 to capture audio content while at least one step of a workflow for troubleshooting a fault is executed, and to cause an update of the workflow based on the captured audio content. The data storage unit 640 may store executable codes which, when executed by the processor, make the processor perform the methods described in this section.

[0056] The network devices discussed in this section are logical units and may be placed in a distributed fashion across a network. Figure 7 illustrates a potential deployment in the network of a mobile network operator. The network communication system 700 includes a network node 710 (which may be the device illustrated and described relative to Figure 6) and at least one other network device 720. Network device 720 also has a network communication interface, a processor and a memory. Additional clouds, of an ASR solution vendor and a TPR vendor, provide an Automated Speech Recognition media-processing module and a text-processing module, respectively.

[0057] Other embodiments are computer programs whose instructions cause a data processing unit to carry out the respective methods described in this section.

[0058] According to one embodiment illustrated in Figure 8, a network device 800 includes a transceiver module 810 configured to enable communication with other devices in a network, and a control module 820 configured to update a workflow designed for troubleshooting a fault, based on audio content captured at a network node where at least one step of the workflow is executed and received via the transceiver module. The transceiver module and the control module are combinations of hardware and software.

[0059] The embodiments disclosed in this section provide methods and network devices that are able to update a workflow based on captured audio content. The embodiments speed up the update and validation of the workflow. New knowledge, or even a new workflow step, is added to a workflow using conversational knowledge from the engineers involved in a troubleshooting process, without (or with minimal) post-facto human interaction. The embodiments may also enable adding contextual information to a workflow, for example, specific spatiotemporal conditions, that can later be used when similar problems arise in the same context.

[0060] It should be understood that this description is not intended to limit the invention. On the contrary, the exemplary embodiments are intended to cover alternatives, modifications and equivalents, which are included in the scope of the invention. Further, in the detailed description of the exemplary embodiments, numerous specific details are set forth in order to provide a comprehensive understanding of the invention. However, one skilled in the art would understand that various embodiments may be practiced without such specific details.

[0001] Although the features and elements of the present exemplary embodiments are described in the embodiments in particular combinations, each feature or element can be used alone without the other features and elements of the embodiments or in various combinations with or without other features and elements disclosed herein. The methods or flowcharts provided in the present application may be implemented in a computer program, software or firmware tangibly embodied in a computer-readable storage medium for execution by a computer or a processor.

This written description uses examples of the subject matter disclosed to enable any person skilled in the art to practice the same, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims.