Operational Excellence Best Practices

The article explores efforts to stabilize a critical service by enhancing observability and implementing service protection.

By Poonam Pradhan · Jul. 02, 24 · Opinion

In the summer of 2023, my team entered a code yellow to stabilize the health of the service we own. This service powers the visualizations on the dashboard product. The decision followed high-severity incidents that impacted the availability of the service.

For context, the service supplies aggregation data to dashboard visualizations by gathering aggregations through the data pipeline, which makes it a critical-path service for dashboard rendering. Any impact on its availability manifests as dashboard viewers experiencing delays in rendering visualizations and, in some cases, rendering failures.

Exit Criteria for Code Yellow

  • Fix the scaling issues on the service using the following mechanisms:
    • Enhance observability into service metrics
    • Implement service protection mechanisms
  • Investigate and implement asynchronous execution of long-running requests
  • Put mitigation mechanisms in place to recover the service in under 10 minutes

Execution Plan

1. Enhance observability into service metrics

  • Enhance service request body logging (a minimal capture sketch follows this list)
  • Replay traffic to observe patterns after the incident
    • Since this service is read-only and the underlying data is not modified, we could rely on the replays.
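
To make the capture idea concrete, here is a minimal sketch of request-body logging that can feed later replays, assuming an Express-based Node.js service. The middleware, file name, and record fields are illustrative assumptions, not the actual implementation.

// Hypothetical Express middleware: append each request body to a file so
// captured traffic can be replayed later against the read-only service.
const express = require('express');
const fs = require('fs');

const app = express();
app.use(express.json());

app.use((req, res, next) => {
  const record = {
    ts: new Date().toISOString(),
    method: req.method,
    path: req.originalUrl,
    body: req.body,
  };
  // NDJSON keeps the capture simple to stream and replay line by line.
  fs.appendFile('captured-requests.ndjson', JSON.stringify(record) + '\n', () => {});
  next();
});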

2. Service protection mechanisms

  • An auto restart of the child thread when a request hangs (sketched below)
  • Request throttling to protect the service from being overwhelmed
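
As a rough illustration of the first mechanism, the sketch below assumes Node.js cluster mode: each worker watches its own event-loop lag and exits when the loop stays blocked, and the primary immediately forks a replacement. The port and thresholds are assumed values, not the service's actual configuration.

const cluster = require('cluster');
const http = require('http');

if (cluster.isPrimary) {
  cluster.fork();
  // Auto restart: replace any worker that exits, including the
  // self-termination below when its event loop is blocked.
  cluster.on('exit', () => cluster.fork());
} else {
  http.createServer((req, res) => res.end('ok')).listen(3000);

  // If this timer fires much later than scheduled, the event loop was
  // blocked (for example, by a request with poor time complexity).
  let last = Date.now();
  setInterval(() => {
    const lag = Date.now() - last - 1000;
    last = Date.now();
    if (lag > 5000) process.exit(1); // assumed 5-second blocking threshold
  }, 1000);
}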

3. Investigate and implement asynchronous execution of long-running requests

  • Replace deprecated packages (the deprecated Request library with Axios; see the sketch below)
  • Optimize slow-running operations
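
For the package replacement, a before/after sketch of moving a call from the deprecated Request library to Axios; the URLs and function names are placeholders, not the service's real endpoints.

const axios = require('axios');

// Before (deprecated request package, callback style):
// request({ url, json: true }, (err, res, body) => handleAggregations(body));

// After (axios, promise based), which also makes it straightforward to fan
// out long-running aggregation calls asynchronously:
async function fetchAggregations(urls) {
  const responses = await Promise.all(urls.map((url) => axios.get(url)));
  return responses.map((r) => r.data);
}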

Key Takeaways

  • Enhanced tooling helped isolate problematic requests and requests with poor time complexity that were blocking the Node.js event loop.
  • We set up traffic capture and on-demand replay of that traffic.
  • We also added CPU profiling and distributed tracing as observability improvements.
  • Optimizations on the critical path: Efforts to optimize operations on the critical path yielded an ~40% improvement in average latencies across DCs. These efforts include (but are not limited to) package upgrades such as replacing Request with Axios (a promise-based HTTP client), fixing unintentional cache misses, and identifying new caching opportunities.
  • A scale testing and continuous load testing framework was set up to monitor the service's scaling needs.
  • Mitigation mechanisms were rolled out, including node clustering mode with an auto restart of a thread when its event loop gets blocked.
  • Request throttling was implemented to protect the service against bad requests (see the sketch after this list).
  • Better alert configuration brought the time to detect anomalies below 5 minutes in the most recent incidents.
  • Clear definition of Sev-1/Sev-2 criteria: We now have clear Sev-1/Sev-2 criteria to help on-calls quickly assess whether the system is in a degraded state and whether they need to pull the Sev-2 trigger to get help.
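
As an illustration of the throttling takeaway, here is a minimal sketch of a concurrency-based throttle for an Express-style Node.js service; the in-flight limit and response shape are assumptions, not the production values.

const MAX_IN_FLIGHT = 100; // assumed limit

let inFlight = 0;

function throttle(req, res, next) {
  if (inFlight >= MAX_IN_FLIGHT) {
    // Shed load early rather than letting expensive or bad requests
    // overwhelm the service and block the event loop.
    return res.status(429).json({ error: 'Too many requests, try again later' });
  }
  inFlight += 1;
  // 'close' fires when the response completes or the connection terminates.
  res.once('close', () => { inFlight -= 1; });
  next();
}

// Usage: app.use(throttle);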

Next Steps

  • To sustain the momentum of operational excellence, the plan is to run quarterly resiliency game days to find the system's weaknesses and practice responding gracefully to failures.
  • Re-evaluate the Northstar architecture of the service to meet the scaling needs of the future.

At this point, I feel more confident in our overall operational posture and better equipped to deal with potential incidents in the future. At the same time, I recognize that operational improvements are a continuous process, and we will continue to build on top of the work done as a part of Code Yellow.

All opinions are my own and are not affiliated with any product or company.
