The last topic to mention in the context of slowness is overutilized compute resources. This is one of the most common scenarios you will encounter, and the metrics you configure to monitor the health of your data analytics pipeline should target compute utilization specifically. When those metrics show that compute resources are under pressure, the remedy is to increase the allocated amount by scaling. Scaling on Azure is very easy, as shown in Figure 10.19.

FIGURE 10.19 Troubleshooting a failed pipeline run: scaling a dedicated SQL pool

Choosing the new amount of compute to allocate and clicking the Save button is all it takes to add more compute to the dedicated SQL pool. It is just as easy with an Apache Spark pool, as shown in Figure 10.20.

FIGURE 10.20 Troubleshooting a failed pipeline run: scaling an Apache Spark cluster
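
If you would rather script the scale operation than click through the portal, the same change can be made with T-SQL. The following is a minimal sketch that uses pyodbc to connect to the master database of the workspace's SQL endpoint and change the pool's performance level; the server name, pool name, credentials, and the DW400c target level are all placeholder assumptions you would replace with your own values.

```python
# A minimal sketch: scale a dedicated SQL pool by changing its service
# objective with T-SQL. All connection values are placeholders.
import pyodbc

SERVER = "myworkspace.sql.azuresynapse.net"  # hypothetical SQL endpoint
POOL_NAME = "mydedicatedsqlpool"             # hypothetical pool name

# ALTER DATABASE must be run against the master database, not against
# the dedicated SQL pool itself, and cannot run inside a transaction.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={SERVER};DATABASE=master;"
    "UID=sqladminuser;PWD=<your-password>",  # placeholder credentials
    autocommit=True,
)
cursor = conn.cursor()

# DW400c is an example performance level; choose the level your
# workload requires.
cursor.execute(
    f"ALTER DATABASE [{POOL_NAME}] MODIFY (SERVICE_OBJECTIVE = 'DW400c')"
)
conn.close()
```

The statement returns quickly, but the scale operation continues in the background, and the pool is briefly unavailable while it completes. An Apache Spark pool can likewise be scaled outside the portal, for example with the Azure CLI's az synapse spark pool update command.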

Keep in mind that scaling has a transient impact on your pipelines, so do not perform a scale up or down while a pipeline or any other process is running. You may not always be in a position to stop every operation that runs against your nodes, but you should avoid scaling while the compute resources are in use.
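
One way to honor this guidance programmatically is to check the pool for in-flight work before triggering a scale. The sketch below queries the sys.dm_pdw_exec_requests dynamic management view on the dedicated SQL pool and proceeds only when nothing else is running; the connection values are placeholders, and treating a single active request (the monitoring query itself) as idle is an assumption you would tune.

```python
# A minimal sketch: check a dedicated SQL pool for in-flight requests
# before scaling it. Connection values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"  # hypothetical endpoint
    "DATABASE=mydedicatedsqlpool;"              # connect to the pool itself
    "UID=sqladminuser;PWD=<your-password>",     # placeholder credentials
)
cursor = conn.cursor()

# Count requests that are still executing or queued. This monitoring
# query also appears in the view, so one active request is expected.
cursor.execute(
    "SELECT COUNT(*) FROM sys.dm_pdw_exec_requests "
    "WHERE status IN ('Running', 'Suspended')"
)
active = cursor.fetchone()[0]
conn.close()

if active <= 1:
    print("Pool is idle; safe to start the scale operation.")
else:
    print(f"{active} requests are in flight; postpone scaling.")
```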

The last form of troubleshooting has to do with the unavailability of a dependent resource. As mentioned in the previous section, this can happen for many reasons, for example, a new firewall rule or a crash that renders the dependent application inaccessible. The issue discussed earlier concerning the AzureSQL activity is a good example of unavailability: it was caused by the dedicated SQL pool being in a paused state, and once the pool was brought back online, the issue was resolved. Refer to Table 10.1, which covers many of the causes of exceptions, slowness, and unavailability in your data analytics pipelines.
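
The paused-pool condition is also something you can detect programmatically before a pipeline run begins. The sketch below queries sys.databases on the master database of the workspace SQL endpoint, where a paused dedicated SQL pool reports a state other than ONLINE; the server name, pool name, and credentials are again placeholder assumptions.

```python
# A minimal sketch: detect whether a dedicated SQL pool is paused before
# a pipeline depends on it. Connection values are placeholders.
import pyodbc

POOL_NAME = "mydedicatedsqlpool"  # hypothetical pool name

# Query the master database; the pool itself refuses connections
# while it is paused.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=master;"
    "UID=sqladminuser;PWD=<your-password>",  # placeholder credentials
)
cursor = conn.cursor()
cursor.execute(
    "SELECT state_desc FROM sys.databases WHERE name = ?", POOL_NAME
)
row = cursor.fetchone()
conn.close()

if row is None:
    print(f"No pool named {POOL_NAME} exists on this server.")
elif row[0] != "ONLINE":
    # Resume the pool (for example, from the portal or with
    # `az synapse sql pool resume`) before running the pipeline.
    print(f"Pool state is {row[0]}; resume it before running the pipeline.")
else:
    print("Pool is online.")
```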

The remainder of this section will cover the following three topics:

  • Debug mode and debugging
  • Error handling
  • Retry and rerun

The pipeline design canvas includes a Debug option (refer to Figure 10.3). When you click the Debug button, the activities that make up the pipeline are executed only within the context of your session, which means the operations they perform are visible only to you. Once you are confident in the changes and ready to move to the next phase of deployment, you can commit them to your source code repository and publish them. After publishing, your changes are visible to others and available for release into your production environment. You learned about testing and release processes in Chapter 9. The point here is that debugging is performed during the development phase, against test data, until the developer has met all the requirements; use the Debug button to perform this testing.

Now refer to Figure 10.13 and Figure 10.17. In addition to the Debug button, there is a toggle switch named Data Flow Debug, which appears only when the pipeline contains a Data Flow activity. If the toggle switch is present but not yet enabled, clicking the Debug button or sliding the toggle to the On position renders the pop-out window shown in Figure 10.21.

FIGURE 10.21 Troubleshooting a failed pipeline run: enabling Data Flow Debug
