Open source refers to non-proprietary software that allows anyone to modify, enhance, or view the source code behind it. Our resources enable programmers to work or collaborate on projects created by different teams, companies, and organizations.
ClickHouse is a fast, resource-efficient OLAP database that can query billions of rows in milliseconds and is trusted by thousands of companies for real-time analytics. Here are seven tips to help you spin up a production ClickHouse cluster and avoid the most common mistakes.

Tip 1: Use Multiple Replicas

While testing ClickHouse, it's natural to deploy a configuration with only one host because you may not want to use additional resources or take on unnecessary expenses. There's nothing wrong with this in a development or testing environment, but running only one host in production can come at a cost. If there's a failure and you have only a single host with a single replica, you're at risk of losing all your data. For production loads, you should use several hosts and replicate data across them. Not only does this ensure that data remains safe when a host fails, but it also allows you to balance the user load across several hosts, which makes resource-intensive queries faster.

Tip 2: Don't Be Shy With RAM

ClickHouse is fast, but its speed depends on available resources, especially RAM. You may see great performance when running a ClickHouse cluster with the minimum amount of RAM in a development or testing environment, but that may change when the load increases. In a production environment with many simultaneous read and write operations, a lack of RAM will be more noticeable. If your ClickHouse cluster doesn't have enough memory, it will be slower, and executing complex queries will take longer. On top of that, when ClickHouse performs resource-intensive operations, it may compete with the OS itself for RAM, which eventually leads to out-of-memory (OOM) errors, downtime, and data loss. The ClickHouse developers recommend using at least 16 GB of RAM to keep the cluster stable. You can opt for less memory, but only do so when you know the load won't be high.

Tip 3: Think Twice When Choosing a Table Engine

ClickHouse supports several table engines with different characteristics, but an engine from the MergeTree family will most likely be ideal. Specialized tables are tailored for specific uses but have limitations that may not be obvious at first glance. Log family engines may seem ideal for logs, for example, but they don't support replication and their database size is limited. Table engines in the MergeTree family are the default choice, and they provide the core data capabilities that ClickHouse is known for. Unless you know for sure why you need a different table engine, use one from the MergeTree family; it will cover most of your use cases.

Tip 4: Don't Use More Than Three Columns for the Primary Key

Primary keys in ClickHouse don't serve the same purpose as in traditional databases. They don't ensure uniqueness; instead, they define how data is stored and then retrieved. If you use all columns as the primary key, you may benefit from faster queries. Yet ClickHouse performance doesn't depend only on reading data, but on writing it, too. When the primary key contains many columns, the whole cluster slows down when data is written to it. The optimal primary key size in ClickHouse is two or three columns, so you can run faster queries without slowing down data inserts. When choosing the columns, think of the requests that will be made and pick columns that will often be selected in filters.

Tip 5: Avoid Small Inserts

When you insert data into ClickHouse, it first saves a part with this data to disk. It then sorts this data, merges it, and inserts it into the right place in the database in the background. If you insert small chunks of data very often, ClickHouse will create a part for every small insert. That will slow down the whole cluster, and you may get the "Too many parts" error. To insert data efficiently, add data in big chunks and avoid sending more than one insert statement per second. ClickHouse can ingest a lot of data at a high pace (even 100K rows per second is okay), but it should be one bulk insert instead of multiple smaller ones. If your data arrives in small portions, consider using an external system, such as managed Kafka, to batch the data. ClickHouse is well integrated with Kafka and can efficiently consume data from it.

Tip 6: Think of How You Will Get Rid of Duplicate Data

Primary keys in ClickHouse don't ensure that data is unique. Unlike in other databases, if you insert duplicate data into ClickHouse, it will be added as is. Thus, the best option is to ensure that the data is unique before inserting it. You can do this, for example, in a stream-processing application or a streaming platform like Apache Kafka. If that's not possible, there are ways to deal with duplicates when you run queries. One option is to use `argMax` to select only the last version of the duplicate row. You can also use the ReplacingMergeTree engine, which removes duplicate entries by design. Finally, you can run `OPTIMIZE TABLE ... FINAL` to merge data parts, but that's a resource-demanding operation, and you should only run it when you know it won't affect cluster performance.
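As a minimal sketch tying several of these tips together (the table and column names here are illustrative, not from a specific production schema):

SQL

-- Tips 3 and 4: a MergeTree table with a compact, two-column primary key
CREATE TABLE events
(
    user_id    UInt64,
    event_time DateTime,
    event_type LowCardinality(String),
    payload    String,
    version    UInt32
)
ENGINE = MergeTree
ORDER BY (user_id, event_time);

-- Tip 5: one bulk insert instead of many small ones
INSERT INTO events (user_id, event_time, event_type, payload, version) VALUES
    (1, now(), 'click', '{"page":"home"}', 1),
    (2, now(), 'view', '{"page":"docs"}', 1);

-- Tip 6: deduplicate at query time with argMax, keeping the latest version of each row
SELECT
    user_id,
    event_time,
    argMax(payload, version) AS latest_payload
FROM events
GROUP BY user_id, event_time;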
Tip 7: Don't Create an Index for Every Column

Just like with primary keys, you may want to use multiple indexes to improve performance. This may help when you query data with filters that match an index, but overall it won't make your queries faster. At the same time, you'll certainly experience the downsides of this strategy: multiple indexes significantly slow down data inserts, because ClickHouse will need to both write the data in the correct place and then update the indexes. When you want to create indexes in a production cluster, select columns that correlate with the primary key.
As developers, we're constantly seeking ways to streamline our workflows and enhance the performance of our applications. One tool that has gained significant traction in the React ecosystem is Redux Toolkit Query (RTK Query). This library, built on top of Redux Toolkit, provides a solution for managing asynchronous data fetching and caching. In this article, we'll explore the key benefits of using RTK Query.

The Benefits of Using RTK Query: A Scalable and Efficient Solution

1. Simplicity and Ease of Use

One of the most compelling advantages of RTK Query is its simplicity. Below is how one would define endpoints for various operations, such as querying data and creating, updating, and deleting resources. The injectEndpoints method allows you to define these endpoints in a concise and declarative manner, reducing boilerplate code and improving readability.

TypeScript

booksApi.injectEndpoints({
  endpoints: builder => ({
    getBooks: builder.query<IBook[], void | string[]>({
      // ...
    }),
    createBundle: builder.mutation<any, void>({
      // ...
    }),
    addBook: builder.mutation<string, AddBookArgs>({
      // ...
    }),
    // ...
  }),
});

2. Automatic Caching and Invalidation

One of the key features of RTK Query is its built-in caching mechanism. The library automatically caches the data fetched from your endpoints, ensuring that subsequent requests for the same data are served from the cache, reducing network overhead and improving performance. These examples demonstrate how RTK Query handles cache invalidation using the invalidatesTags option.

TypeScript

createBundle: builder.mutation<any, void>({
  invalidatesTags: [BooksTag],
  // ...
}),
addBook: builder.mutation<string, AddBookArgs>({
  invalidatesTags: [BooksTag],
  // ...
}),

By specifying the BooksTag, RTK Query knows which cache entries to invalidate when a mutation (e.g., createBundle or addBook) is performed, ensuring that the cache stays up to date and consistent with the server data.

3. Scalability and Maintainability

As your application grows in complexity, managing asynchronous data fetching and caching can become increasingly challenging. RTK Query's modular approach and separation of concerns make it easier to scale and maintain your codebase. Each endpoint is defined independently, allowing you to easily add, modify, or remove endpoints as needed without affecting the rest of your application.

TypeScript

endpoints: builder => ({
  getBooks: builder.query<IBook[], void | string[]>({
    // ...
  }),
  createBundle: builder.mutation<any, void>({
    // ...
  }),
  // ...
})

This modular structure promotes code reusability and makes it easier to reason about the different parts of your application, leading to better maintainability and collaboration within your team.

4. Efficient Data Fetching and Normalization

RTK Query provides built-in support for efficient data fetching and normalization. The queryFn below shows how you can fetch data from multiple sources and normalize the data using the toSimpleBooks function. However, the original implementation can be optimized to reduce code duplication and improve readability. Here's an optimized version of the code:

TypeScript

async queryFn(collections) {
  try {
    const [snapshot, snapshot2] = await Promise.all(
      collections.map(fetchCachedCollection)
    );
    const success = await getBooksBundle();
    const books = success
      ? toSimpleBooks([...snapshot.docs, ...snapshot2.docs])
      : [];
    return { data: books };
  } catch (error) {
    return { error };
  }
}

In this optimized version, we're using Promise.all to fetch the two collections (latest-books-1-query and latest-books-2-query) concurrently. This approach ensures that we don't have to wait for one collection to finish fetching before starting the other, potentially reducing the overall fetching time. Additionally, we've moved the getBooksBundle call inside the try block, ensuring that it's executed only if the collections are fetched successfully. This change helps maintain a clear separation of concerns and makes the code easier to reason about. By leveraging RTK Query's efficient data fetching capabilities and employing best practices like Promise.all, you can ensure that your application fetches and normalizes data in an optimized and efficient manner, leading to improved performance and a better user experience.

5. Ease of Use With Exposed Hooks

One of the standout features of RTK Query is the ease of use it provides through its exposed hooks. Finally, I like to export the available generated typed hooks so you can use them (e.g., useGetBooksQuery, useCreateBundleMutation, etc.) to interact with the defined endpoints within your React components. These hooks abstract away the complexities of managing asynchronous data fetching and caching, allowing you to focus on building your application's logic.

TypeScript

export const {
  useGetBooksQuery,
  useLazyGetBooksQuery,
  useCreateBundleMutation,
  useAddBookMutation,
  useDeleteBookMutation,
  useUpdateBookMutation,
} = booksApi;

By leveraging these hooks, you can fetch data, trigger mutations, and handle loading and error states, all while benefiting from the caching and invalidation mechanisms provided by RTK Query.
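As a quick illustration of consuming one of these generated hooks (this component is illustrative and not from the original ReadM codebase; it assumes IBook exposes id and title fields and that booksApi lives in ./booksApi):

TypeScript

import React from 'react';
import { useGetBooksQuery } from './booksApi';

// Illustrative component: reads the cached book list and renders loading/error states
export function BookList() {
  const { data: books, isLoading, isError } = useGetBooksQuery();

  if (isLoading) return <p>Loading books...</p>;
  if (isError) return <p>Something went wrong.</p>;

  return (
    <ul>
      {books?.map(book => (
        <li key={book.id}>{book.title}</li>
      ))}
    </ul>
  );
}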
Conclusion

By adopting RTK Query, you gain access to a solution for managing asynchronous data fetching and caching, while experiencing the simplicity, scalability, and ease of use provided by its exposed hooks. Whether you're building a small application or a large-scale project, RTK Query can help you streamline your development process and deliver high-performance, responsive applications. The code within this post is taken from a live app in production: ReadM, a real-time AI platform for reading fluency assessments and insights.
I'm a senior solution architect and polyglot programmer interested in the evolution of programming languages and their impact on application development. Around three years ago, I encountered WebAssembly (Wasm) through the .NET Blazor project. This technology caught my attention because it can execute applications at near-native speed across different programming languages. This was especially exciting to me as a polyglot programmer, since my expertise ranges across multiple programming languages, including .NET, PHP, Node.js, Rust, and Go. Most of the work I do is building cloud-native enterprise applications, so I have been particularly interested in advancements that broaden Wasm's applicability in cloud-native development. WebAssembly 2.0 was a significant leap forward, improving performance and flexibility while streamlining integration with web and cloud infrastructures to make Wasm an even more powerful tool for developers building versatile and dynamic cloud-native applications. I aim to share the knowledge and understanding I've gained, providing an overview of Wasm's capabilities and its potential impact on the cloud-native development landscape.

Polyglot Programming and the Component Model

My initial attraction to WebAssembly stemmed from its capability to enhance browser functionality for graphics-intensive and gaming applications, breaking free from the limitations of traditional web development. It also allows developers to employ languages like C++ or Rust to perform high-efficiency computations and animations, offering near-native performance within the browser environment. Wasm's polyglot programming capability and component model are two of its flagship capabilities. The idea of leveraging the strengths of various programming languages within a unified application environment seemed like the next leap in software development. Wasm offers the potential to use the unique strengths of various programming languages within a single application environment, promoting a more efficient and versatile development process. For instance, developers could leverage Rust's speed for performance-critical components and .NET's comprehensive library support for business logic, optimizing both development efficiency and application performance.

This led me to Spin, an open-source tool for creating and deploying Wasm applications in cloud environments. To test Wasm's polyglot programming capabilities, I experimented with the plugin and middleware models. I put the application business logic in one component, while another component used the Spin-supported host capabilities (I/O, random, sockets, etc.) to work with the host. Finally, I composed these with http-auth-middleware, an existing Spin component for OAuth 2.0, and wrote more components for logging, rate limiting, etc. All of them were composed together into one app that runs on the host world (the component model).

Cloud-Native Coffeeshop App

The first app I wrote using WebAssembly was an event-driven microservices coffeeshop app written in Golang and deployed using Nomad, Consul Connect, Vault, and Terraform (you can see it on my GitHub). I was curious about how it would work with Kubernetes, and then Dapr. I expanded it and wrote several use cases with Dapr, such as entire apps with Spin, polyglot apps (Spin and other container apps with Docker), Spin apps with Dapr, and others.
What I like about it is the start-up speed (it's very quick to get up and running) and the size of the app: it's tiny but powerful. The WebAssembly ecosystem has matured a lot in the past year as it relates to enterprise projects. For the types of cloud-native projects I'd like to pursue, it would benefit from a more developed support system for stateful applications, as well as an integrated messaging system between components. I would love to see more of the capabilities my enterprise customers need, such as gRPC or other communication protocols (Spin currently only supports HTTP), data processing and transformation like data pipelines, a multi-threading mechanism, CQRS, polyglot programming language aggregations (internal modular-monolith style or external microservices style), and content negotiation (XML, JSON, plain text). We also need real-world examples demonstrating Wasm's capability to tackle enterprise-level challenges, fostering better understanding and wider technology adoption. We can see how well ZEISS is doing from their presentation at KubeCon in Paris last month. I would like to see more and more companies like them involved in this game; then, from the developer perspective, we will benefit a lot. Not only will we be able to develop WebAssembly apps more easily, but many enterprise scenarios will also be addressed, and we will work together to make WebAssembly more handy and effective.

The WebAssembly Community

Sharing my journey with the WebAssembly community has been a rewarding part of my exploration, especially with the Spin community, who have been so helpful in sharing best practices and new ideas. Through tutorials and presentations at community events, I've aimed to contribute to the collective understanding of WebAssembly and cloud-native development, and I hope to see more people sharing their experiences. I will continue creating tutorials and educational content, as well as diving into new projects using WebAssembly, to inspire and educate others about its potential. I would encourage anyone getting started to get involved in the Wasm community of your choice to accelerate your journey.

WebAssembly's Cloud-Native Future

I feel positive about the potential for WebAssembly to change how we do application development, particularly in the cloud-native space. I'd like to explore how Wasm could underpin the development of hybrid cloud platforms and domain-specific applications. One particularly exciting prospect is the potential for building an e-commerce platform based on WebAssembly, leveraging its cross-platform capabilities and performance benefits to offer a superior user experience. The plugin model has existed for a long time in the e-commerce world (see what Shopify did), and with WebAssembly's component model, we can build the application with polyglot programming languages such as Rust, Go, TypeScript, .NET, Java, PHP, etc. WebAssembly 2.0 supports the development of more complex and interactive web applications, opening the door to new use cases such as serverless stateless functions, data transformation, and full-fledged web API functionality, moving into edge devices (some embedded components). New advancements like WASI 3.0 with asynchronous components are bridging the gaps. I eagerly anticipate the further impact of WebAssembly on our approaches to building and deploying applications. We're just getting started.
The world of telecom is evolving at a rapid pace, and it is crucial for operators to stay ahead of the game. As 5G technology becomes the norm, transitioning seamlessly from 4G technology (which operates on an OpenStack cloud) to 5G technology (which uses Kubernetes) is a strategic imperative. In the current scenario, operators invest in multiple vendor-specific monitoring tools, leading to higher costs and less efficient operations. In the coming 5G world, however, operators can adopt a unified monitoring and alert system for all their products. This single system, able to monitor network equipment, customer devices, and service platforms, offers a holistic view of the entire system, reducing complexity and enhancing efficiency.

By adopting a Prometheus-based monitoring and alert system, operators can streamline operations, reduce costs, and enhance customer experience. With a single monitoring system, operators can monitor their entire 5G system seamlessly, ensuring optimal performance and avoiding disruptions. This practical solution eliminates the need for a complete overhaul and offers a cost-effective transition. Let's dive in.

Prometheus, Grafana, and Alert Manager

Prometheus is a monitoring and alerting tool that uses a pull-based model. It scrapes, collects, and stores Key Performance Indicators (KPIs) with labels and timestamps, enabling it to collect metrics from targets, which in the 5G telecom world are the network functions' namespaces.

Grafana is a dynamic web application that offers a wide range of functionalities. It visualizes data, allowing operators to build the charts, graphs, and dashboards they want to see. Its primary feature is support for multiple graphing and dashboarding modes through a GUI (graphical user interface). Grafana can seamlessly integrate data collected by Prometheus, making it an indispensable tool for telecom operators. It is a powerful web application that supports the integration of different data sources into one dashboard, enabling continuous monitoring. This versatility improves response rates by alerting the telecom operator's team when an incident emerges, minimizing 5G network function downtime.

The Alert Manager is a crucial component that manages alerts sent by the Prometheus server via alerting rules. It handles the received alerts, including silencing and inhibiting them and sending out notifications via email or chat, and it also deduplicates and groups alerts and routes them to a centralized webhook receiver, making it a must-have tool for any telecom operator.

Architectural Diagram

Components of Prometheus (Specific to a 5G Telecom Operator)

- Core component: The Prometheus server scrapes HTTP endpoints and stores the data as time series. The Prometheus server, a crucial component in the 5G telecom world, collects metrics from the Prometheus targets; in our context, these targets are the Kubernetes clusters that house the 5G network functions.
- Time series database (TSDB): Prometheus stores telecom metrics as time series data.
- HTTP server: An API to query data stored in the TSDB; the Grafana dashboard can query this data for visualization.
- Telecom operator-specific (5G) client libraries for instrumenting application code.
- Push gateway: A scrape target for short-lived jobs.
- Service discovery: In the world of 5G, network function pods are constantly added or deleted by telecom operators to scale up or down. Prometheus's adaptable service discovery component monitors the ever-changing list of pods.
- The Prometheus web UI, accessible through port 9090, is a data visualization tool. It allows users to view and analyze Prometheus data in a user-friendly and interactive manner, enhancing the monitoring capabilities of 5G telecom operators.
- The Alert Manager, a key component of Prometheus, is responsible for handling alerts. It is designed to notify users if something goes wrong, triggering notifications when certain conditions are met. When alerting triggers are met, Prometheus alerts the Alert Manager, which sends alerts through various channels such as email or messenger, ensuring timely and effective communication of critical issues.
- Grafana for dashboard visualization (the actual graphs).

With Prometheus's robust components, your telecom operator's 5G network functions are monitored diligently, ensuring reliable resource utilization, performance tracking, detection of availability errors, and more. Prometheus provides the tools necessary to keep your network running smoothly and efficiently.

Prometheus Features

- A multi-dimensional data model identified by metric details, with PromQL (Prometheus Query Language) as the query language and an HTTP pull model (see the short example after this list).
- Telecom operators can discover 5G network functions with service discovery and static configuration.
- Multiple modes of dashboard and GUI support provide a comprehensive and customizable experience for users.
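As a small illustration of PromQL in this setting (the metric name and label values below are hypothetical, not from a particular 5G vendor), an operator might chart a per-namespace request rate and an error-ratio threshold like this:

PromQL

# Per-namespace request rate across 5G network functions over the last 5 minutes
sum by (namespace) (rate(http_requests_total[5m]))

# Error ratio per namespace; could back an alerting rule when it exceeds 5%
  sum by (namespace) (rate(http_requests_total{code=~"5.."}[5m]))
/ sum by (namespace) (rate(http_requests_total[5m]))
> 0.05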
Prometheus Remote Write to Central Prometheus From Network Functions

5G operators will have multiple network functions from various vendors, such as the SMF (Session Management Function), UPF (User Plane Function), AMF (Access and Mobility Management Function), PCF (Policy Control Function), and UDM (Unified Data Management). Using separate Prometheus/Grafana dashboards for each network function can lead to a complex and inefficient monitoring process for the 5G network operator. To address this, it is highly recommended that all data/metrics from the individual Prometheus instances be consolidated into a single Central Prometheus, simplifying the monitoring process and enhancing efficiency. The 5G network operator can then monitor all the data in one centralized location, with a comprehensive view of the network's performance and the tools needed for efficient monitoring.

Grafana

Grafana Features

- Panels: This feature empowers operators to visualize telecom 5G data in many ways, including histograms, graphs, maps, and KPIs. It offers a versatile and adaptable interface for data representation, enhancing the efficiency and effectiveness of data analysis.
- Plugins: This feature renders telecom 5G data in real time via a user-friendly API (Application Programming Interface), ensuring operators always have the most accurate and up-to-date data at their fingertips. It also enables operators to create data source plugins and retrieve metrics from any API.
- Transformations: This feature allows you to flexibly adapt, summarize, combine, and perform KPI metric queries/calculations across 5G network function data sources, providing the tools to effectively manipulate and analyze your data.
- Annotations: Rich events from different telecom 5G network function data sources are used to annotate metrics-based graphs.
- Panel editor: A reliable and consistent graphical user interface for configuring and customizing 5G telecom metrics panels.

Grafana Sample Dashboard GUI for 5G

Alert Manager

Alert Manager Components

The Ingester ingests all alerts, while the Grouper groups them into categories. The De-duplicator prevents repetitive alerts, ensuring you're not bombarded with notifications. The Silencer mutes alerts based on a label, and the Throttler regulates the frequency of alerts. Finally, the Notifier ensures that third parties are notified promptly.

Alert Manager Functionalities

- Grouping: Grouping categorizes similar alerts into a single notification. This is helpful during larger outages, when many 5G network functions fail simultaneously and all the alerts fire at once. The telecom operator will expect to get only a single page while still being able to see the exact service instances affected.
- Inhibition: Inhibition suppresses notifications for specific low-priority alerts if certain major/critical alerts are already firing. For example, when a critical alert fires indicating that an entire 5G SMF (Session Management Function) cluster is not reachable, Alert Manager can mute all other minor/warning alerts concerning this cluster.
- Silences: Silences simply mute alerts for a given time. Incoming alerts are checked against the regular expression matchers of each active silence; if they match, no notifications are sent out for that alert.
- High availability: Telecom operators will not load balance traffic between Prometheus and all its Alert Managers; instead, they will point Prometheus to a list of all Alert Managers.

Dashboard Visualization

The Grafana dashboard visualizes the Alert Manager webhook traffic notifications.

Configuration YAMLs (YAML Ain't Markup Language)

Telecom operators can install and run Prometheus using the configuration below:

YAML

prometheus:
  enabled: true
  route:
    enabled: {}
  nameOverride: Prometheus
  tls:
    enabled: true
    certificatesSecret: backstage-prometheus-certs
    certFilename: tls.crt
    certKeyFilename: tls.key
  volumePermissions:
    enabled: true
  initdbScriptsSecret: backstage-prometheus-initdb
  prometheusSpec:
    retention: 3d
    replicas: 2
    prometheusExternalLabelName: prometheus_cluster
    image:
      repository: <5G operator image repository for Prometheus>
      tag: <Version example v2.39.1>
      sha: ""
    podAntiAffinity: "hard"
    securityContext: null
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 1Gi
    serviceMonitorNamespaceSelector:
      matchExpressions:
        - {key: namespace, operator: In, values: [<Network function 1 namespace>, <Network function 2 namespace>]}
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false
    ruleSelectorNilUsesHelmValues: false

The configuration below routes scrape data, segregated by namespace, to the Central Prometheus. Note: This configuration can be appended to the Prometheus installation YAML above.
YAML

remoteWrite:
  - url: <Central Prometheus URL for namespace 1 by 5G operator>
    basicAuth:
      username:
        name: <secret username for namespace 1>
        key: username
      password:
        name: <secret password for namespace 1>
        key: password
    tlsConfig:
      insecureSkipVerify: true
    writeRelabelConfigs:
      - sourceLabels:
          - namespace
        regex: <namespace 1>
        action: keep
  - url: <Central Prometheus URL for namespace 2 by 5G operator>
    basicAuth:
      username:
        name: <secret username for namespace 2>
        key: username
      password:
        name: <secret password for namespace 2>
        key: password
    tlsConfig:
      insecureSkipVerify: true
    writeRelabelConfigs:
      - sourceLabels:
          - namespace
        regex: <namespace 2>
        action: keep

Telecom operators can install and run Grafana using the configuration below:

YAML

grafana:
  replicas: 2
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "app.kubernetes.io/name"
                operator: In
                values:
                  - Grafana
          topologyKey: "kubernetes.io/hostname"
  securityContext: false
  rbac:
    pspEnabled: false  # Must be disabled due to tenant permissions
    namespaced: true
  adminPassword: admin
  image:
    repository: <artifactory>/Grafana
    tag: <version>
    sha: ""
    pullPolicy: IfNotPresent
  persistence:
    enabled: false
  initChownData:
    enabled: false
  sidecar:
    image:
      repository: <artifactory>/k8s-sidecar
      tag: <version>
      sha: ""
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 50m
        memory: 50Mi
    dashboards:
      enabled: true
      label: grafana_dashboard
      labelValue: "Vendor name"
    datasources:
      enabled: true
      defaultDatasourceEnabled: false
  additionalDataSources:
    - name: Prometheus
      type: Prometheus
      url: http://<prometheus-operated>:9090
      access: proxy
      isDefault: true
      jsonData:
        timeInterval: 30s
  resources:
    limits:
      cpu: 400m
      memory: 512Mi
    requests:
      cpu: 50m
      memory: 206Mi
  extraContainers:
    - name: oauth-proxy
      image: <artifactory>/origin-oauth-proxy:<version>
      imagePullPolicy: IfNotPresent
      ports:
        - name: proxy-web
          containerPort: 4181
      args:
        - --https-address=:4181
        - --provider=openshift
        # Service account name here must be "<Helm Release name>-grafana"
        - --openshift-service-account=monitoring-grafana
        - --upstream=http://localhost:3000
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
        - --cookie-secret=SECRET
        - --pass-basic-auth=false
      resources:
        limits:
          cpu: 100m
          memory: 256Mi
        requests:
          cpu: 50m
          memory: 128Mi
      volumeMounts:
        - mountPath: /etc/tls/private
          name: grafana-tls
  extraContainerVolumes:
    - name: grafana-tls
      secret:
        secretName: grafana-tls
  serviceAccount:
    annotations:
      "serviceaccounts.openshift.io/oauth-redirecturi.first": https://[SPK exposed IP for Grafana]
  service:
    targetPort: 4181
    annotations:
      service.alpha.openshift.io/serving-cert-secret-name: <secret>

Telecom operators can install and run Alert Manager using the configuration below:

YAML

alertmanager:
  enabled: true
  alertmanagerSpec:
    image:
      repository: prometheus/alertmanager
      tag: <version>
    replicas: 2
    podAntiAffinity: hard
    securityContext: null
    resources:
      requests:
        cpu: 25m
        memory: 200Mi
      limits:
        cpu: 100m
        memory: 400Mi
    containers:
      - name: config-reloader
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
          limits:
            cpu: 25m
            memory: 50Mi

The configuration below routes Prometheus Alert Manager data to the operator's centralized webhook receiver. Note: This configuration can be appended to the Alert Manager installation YAML above.
YAML

config:
  global:
    resolve_timeout: 5m
  route:
    group_by: ['alertname']
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 12h
    receiver: 'null'
    routes:
      - receiver: '<Network function 1>'
        group_wait: 10s
        group_interval: 10s
        group_by: ['alertname','oid','action','time','geid','ip']
        matchers:
          - namespace="<namespace 1>"
      - receiver: '<Network function 2>'
        group_wait: 10s
        group_interval: 10s
        group_by: ['alertname','oid','action','time','geid','ip']
        matchers:
          - namespace="<namespace 2>"

Conclusion

The open-source OAM (Operation and Maintenance) tools Prometheus, Grafana, and Alert Manager can benefit 5G telecom operators. Prometheus periodically captures the status of monitored 5G telecom network functions over HTTP, and any component can be connected to the monitoring as long as the 5G telecom operator provides the corresponding HTTP interface. Prometheus and Grafana Agent give the 5G telecom operator control over the metrics the operator wants to report; once the data is in Grafana, it can be stored in a Grafana database for extra data redundancy. In conclusion, Prometheus allows 5G telecom operators to improve their operations and offer better customer service. Adopting a unified monitoring and alert system like Prometheus is one way to achieve this.
In an era of heightened data privacy concerns, the development of local Large Language Model (LLM) applications provides an alternative to cloud-based solutions. Ollama offers one solution, enabling LLMs to be downloaded and used locally. In this article, we'll explore how to use Ollama with LangChain and SingleStore using a Jupyter Notebook. The notebook file used in this article is available on GitHub.

Introduction

We'll use a Virtual Machine running Ubuntu 22.04.2 as our test environment. An alternative would be to use venv.

Create a SingleStoreDB Cloud Account

A previous article showed the steps required to create a free SingleStore Cloud account. We'll use Ollama Demo Group as our Workspace Group Name and ollama-demo as our Workspace Name. We'll make a note of our password and host name. For this article, we'll temporarily allow access from anywhere by configuring the firewall under Ollama Demo Group > Firewall. For production environments, firewall rules should be added to provide increased security.

Create a Database

In our SingleStore Cloud account, let's use the SQL Editor to create a new database. Call this ollama_demo, as follows:

SQL

CREATE DATABASE IF NOT EXISTS ollama_demo;

Install Jupyter

From the command line, we'll install the classic Jupyter Notebook, as follows:

Shell

pip install notebook

Install Ollama

We'll install Ollama, as follows:

Shell

curl -fsSL https://ollama.com/install.sh | sh

Environment Variable

Using the password and host information we saved earlier, we'll create an environment variable to point to our SingleStore instance, as follows:

Shell

export SINGLESTOREDB_URL="admin:<password>@<host>:3306/ollama_demo"

Replace <password> and <host> with the values for your environment.

Launch Jupyter

We are now ready to work with Ollama, and we'll launch Jupyter:

Shell

jupyter notebook

Fill Out the Notebook

First, some packages:

Shell

!pip install langchain ollama --quiet --no-warn-script-location

Next, we'll import some libraries:

Python

import ollama
from langchain_community.vectorstores import SingleStoreDB
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_community.embeddings import OllamaEmbeddings

We'll create embeddings using all-minilm (45 MB at the time of writing):

Python

ollama.pull("all-minilm")

Example output:

Plain Text

{'status': 'success'}

For our LLM we'll use llama2 (3.8 GB at the time of writing):

Python

ollama.pull("llama2")

Example output:

Plain Text

{'status': 'success'}

Next, we'll use the example text from the Ollama website:

Python

documents = [
    "Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels",
    "Llamas were first domesticated and used as pack animals 4,000 to 5,000 years ago in the Peruvian highlands",
    "Llamas can grow as much as 6 feet tall though the average llama between 5 feet 6 inches and 5 feet 9 inches tall",
    "Llamas weigh between 280 and 450 pounds and can carry 25 to 30 percent of their body weight",
    "Llamas are vegetarians and have very efficient digestive systems",
    "Llamas live to be about 20 years old, though some only live for 15 years and others live to be 30 years old"
]

embeddings = OllamaEmbeddings(
    model = "all-minilm",
)

dimensions = len(embeddings.embed_query(documents[0]))

docs = [Document(text) for text in documents]

We'll specify all-minilm for the embeddings, determine the number of dimensions returned for the first document, and convert the documents to the format required by SingleStore.
Next, we'll use LangChain:

Python

docsearch = SingleStoreDB.from_documents(
    docs,
    embeddings,
    table_name = "langchain_docs",
    distance_strategy = DistanceStrategy.EUCLIDEAN_DISTANCE,
    use_vector_index = True,
    vector_size = dimensions
)

In addition to the documents and embeddings, we'll provide the name of the table we want to use for storage, the distance strategy, that we want to use a vector index, and the vector size using the dimensions we previously determined. These and other options are explained in further detail in the LangChain documentation.

Using the SQL Editor in SingleStore Cloud, let's check the structure of the table created by LangChain:

SQL

USE ollama_demo;
DESCRIBE langchain_docs;

Example output:

Plain Text

+----------+------------------+------+------+---------+----------------+
| Field    | Type             | Null | Key  | Default | Extra          |
+----------+------------------+------+------+---------+----------------+
| id       | bigint(20)       | NO   | PRI  | NULL    | auto_increment |
| content  | longtext         | YES  |      | NULL    |                |
| vector   | vector(384, F32) | NO   | MUL  | NULL    |                |
| metadata | JSON             | YES  |      | NULL    |                |
+----------+------------------+------+------+---------+----------------+

We can see that a vector column with 384 dimensions was created for storing the embeddings.

Let's also quickly check the stored data:

SQL

USE ollama_demo;
SELECT SUBSTRING(content, 1, 30) AS content, SUBSTRING(vector, 1, 30) AS vector FROM langchain_docs;

Example output:

Plain Text

+--------------------------------+--------------------------------+
| content                        | vector                         |
+--------------------------------+--------------------------------+
| Llamas weigh between 280 and 4 | [0.235754818,0.242168128,-0.26 |
| Llamas were first domesticated | [0.153105229,0.219774529,-0.20 |
| Llamas are vegetarians and hav | [0.285528302,0.10461951,-0.313 |
| Llamas are members of the came | [-0.0174482632,0.173883006,-0. |
| Llamas can grow as much as 6 f | [-0.0232818555,0.122274697,-0. |
| Llamas live to be about 20 yea | [0.0260244086,0.212311044,0.03 |
+--------------------------------+--------------------------------+

Finally, let's check the vector index:

SQL

USE ollama_demo;
SHOW INDEX FROM langchain_docs;

Example output:

Plain Text

+----------------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------------+---------+---------------+---------------------------------------+
| Table          | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type       | Comment | Index_comment | Index_options                         |
+----------------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------------+---------+---------------+---------------------------------------+
| langchain_docs | 0          | PRIMARY    | 1            | id          | NULL      | NULL        | NULL     | NULL   |      | COLUMNSTORE HASH |         |               |                                       |
| langchain_docs | 1          | vector     | 1            | vector      | NULL      | NULL        | NULL     | NULL   |      | VECTOR           |         |               | {"metric_type": "EUCLIDEAN_DISTANCE"} |
| langchain_docs | 1          | __SHARDKEY | 1            | id          | NULL      | NULL        | NULL     | NULL   |      | METADATA_ONLY    |         |               |                                       |
+----------------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------------+---------+---------------+---------------------------------------+

We'll now ask a question, as follows:

Python
prompt = "What animals are llamas related to?"
docs = docsearch.similarity_search(prompt)
data = docs[0].page_content
print(data)

Example output:

Plain Text

Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels

Next, we'll use the LLM, as follows:

Python

output = ollama.generate(
    model = "llama2",
    prompt = f"Using this data: {data}. Respond to this prompt: {prompt}"
)

print(output["response"])

Example output:

Plain Text

Llamas are members of the camelid family, which means they are closely related to other animals such as:

1. Vicuñas: Vicuñas are small, wild relatives of llamas and alpacas. They are native to South America and are known for their soft, woolly coats.
2. Camels: Camels are also members of the camelid family and are known for their distinctive humps on their backs. There are two species of camel: the dromedary and the Bactrian.
3. Alpacas: Alpacas are domesticated animals that are closely related to llamas and vicuñas. They are native to South America and are known for their soft, luxurious fur.

So, in summary, llamas are related to vicuñas, camels, and alpacas.

Summary

In this article, we've seen that we can connect to SingleStore, store documents and embeddings, ask questions about the data in the database, and use the power of LLMs locally through Ollama.
Cross-Origin Resource Sharing (CORS) often becomes a stumbling block for developers attempting to interact with APIs hosted on different domains. The challenge intensifies when direct server configuration isn't an option, pushing developers towards alternative solutions like the widely used cors-anywhere. However, less known is the capability of NGINX's proxy_pass directive to handle not only local domains and upstreams but also external sources. This is how the idea was born to write a universal (with some reservations) NGINX config that supports any given domain.

Understanding the Basics and Setup

CORS is a security feature that restricts web applications from making requests to a different domain than the one that served the web application itself. This is a crucial security measure to prevent malicious websites from accessing sensitive data. However, when legitimate cross-domain requests are necessary, properly configuring CORS is essential. The NGINX proxy server offers a powerful solution to this dilemma. By utilizing NGINX's flexible configuration system, developers can create a proxy that handles CORS preflight requests and manipulates headers to ensure compliance with CORS policies. Here's how (a consolidated configuration sketch follows at the end of this section):

Variable Declaration and Manipulation

With the map directive, NGINX allows the declaration of new variables based on existing global ones, incorporating regular expression support for dynamic processing. For instance, a specific path can be extracted from a URL, allowing for precise control over request handling: when requesting http://example.com/api, a $my_request_path variable can be made to contain api.

Header Management

NGINX facilitates the addition of custom headers to responses via add_header and to proxied requests through proxy_set_header. Simultaneously, proxy_hide_header can be used to conceal headers received from the proxied server, ensuring only the necessary information is passed back to the client. With this, we can expose an X-Request-Path header carrying the api value extracted above.

Conditional Processing

Utilizing the if directive, NGINX can perform actions based on specific conditions, such as returning a predetermined response code for OPTIONS method requests, streamlining the handling of CORS preflight checks.

Putting It All Together

First, we declare a $proxy_uri variable extracted from $request_uri. In short, it works like this: when requesting http://example.com/example.com, the $proxy_uri variable will contain https://example.com. From the resulting $proxy_uri, we extract the part that will match the Origin header. For the Forwarded header, we need to process two variables at once; the processed X-Forwarded-For header is already built into NGINX. With the proxy server itself declared, we get a minimally working proxy that can process CORS preflight requests and add the appropriate headers.

Enhancing Security and Performance

Beyond the basic setup, further refinements can improve security and performance:

Hiding CORS Headers

When NGINX handles CORS internally, it's beneficial to hide these headers from client responses to prevent exposure of server internals.

Rate Limit Bypassing

It is also a good idea to pass the client's IP upstream, to bypass rate limits that can be triggered when several users access the same resource through the proxy.

Disabling Caching

Finally, for dynamic content or sensitive information, disabling caching is a best practice, ensuring data freshness and privacy.
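The article's original configuration listings are not reproduced here; the following is a consolidated sketch of the approach described above. The variable names, regexes, and header lists are illustrative reconstructions, not the author's exact config:

NGINX

# Map /example.com/path on our proxy to https://example.com/path upstream
map $request_uri $proxy_uri {
    ~^/(?<target_host>[^/]+)(?<target_path>/.*)?$ https://$target_host$target_path;
}

# Derive the Origin value to present to the upstream
map $proxy_uri $proxy_origin {
    ~^(?<origin>https?://[^/]+) $origin;
}

server {
    listen 80;

    # A resolver is required because proxy_pass uses a variable
    resolver 1.1.1.1;

    location / {
        # Answer CORS preflight requests directly
        if ($request_method = OPTIONS) {
            add_header Access-Control-Allow-Origin "$http_origin" always;
            add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
            add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
            return 204;
        }

        # Hide the upstream's own CORS headers and add ours
        proxy_hide_header Access-Control-Allow-Origin;
        add_header Access-Control-Allow-Origin "$http_origin" always;

        # Pass the client's IP upstream to help avoid shared rate limits
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Origin $proxy_origin;

        # Disable caching for dynamic or sensitive content
        proxy_buffering off;
        add_header Cache-Control "no-store" always;

        proxy_ssl_server_name on;
        proxy_pass $proxy_uri;
    }
}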
Conclusion This guide not only demystifies the process of configuring NGINX to handle CORS requests but also equips developers with the knowledge to create a robust, flexible proxy server capable of supporting diverse application needs. Through careful configuration and understanding of both CORS policies and NGINX's capabilities, developers can overcome cross-origin restrictions, enhance application performance, and ensure data security. This advanced understanding and application of NGINX not only solves a common web development hurdle but also showcases the depth of skill and innovation possible when navigating web security and resource-sharing challenges.
Just as you can plug in a toaster, and add bread... You can plug this API appliance into your database, and add rules and Python. Automation can provide:

- Remarkable agility and simplicity
- With all the flexibility of a framework

Using conventional frameworks, creating a modern, API-based web app is a formidable undertaking. It might require several weeks and extensive knowledge of a framework. In this article, we'll use API Logic Server (open source, available here) to create it in minutes instead of weeks or months. And we'll show how it can be done with virtually zero knowledge of frameworks, or even Python. We'll even show how to add message-based integration.

1. Plug It Into Your Database

Here's how you plug the API Logic Server appliance into your database:

Shell

$ ApiLogicServer create-and-run --project-name=sample_ai --db-url=sqlite:///sample_ai.sqlite

No database? Create one with AI, as described in the article "AI and Rules for Agile Microservices in Minutes."

It Runs: Admin App and API

Instantly, you have a running system:

- A multi-page Admin App, supported by...
- A multi-table JSON:API with Swagger

So right out of the box, you can support:

- Custom client app dev
- Ad hoc application integration
- Agile collaboration, based on working software

Instead of weeks of complex and time-consuming framework coding, you have working software, now.

Containerize

API Logic Server can run as a container or a standard pip install. In either case, scripts are provided to containerize your project for deployment, e.g., to the cloud.

2. Add Rules for Logic

Instant working software is great: one command instead of weeks of work, and virtually zero knowledge required. But without logic enforcement, it's little more than a cool demo. Behind the running application is a standard project. Open it with your IDE, and:

- Declare logic with code completion.
- Debug it with your debugger.

Instead of conventional procedural logic, this code is declarative. Like a spreadsheet, you declare rules for multi-table derivations and constraints. The rules handle all the database access, dependencies, and ordering. The results are quite remarkable:

- The five spreadsheet-like rules sketched below perform the same logic as 200 lines of Python.
- The backend half of your system is 40X more concise.

Similar rules are provided for granting row-level access, based on user roles.
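As a sketch of what such declarative rules look like, here is the classic check-credit example from the API Logic Server documentation, lightly annotated. The model names (Customer, Order, Item, Product) come from that documented sample, not necessarily from the sample_ai project:

Python

# As in a generated project's logic/declare_logic.py
from logic_bank.logic_bank import Rule
from database import models

# Five declarative rules: multi-table derivations plus a constraint.
# The rules engine handles ordering, dependencies, and SQL access.
Rule.constraint(validate=models.Customer,        # reject updates that violate this
                as_condition=lambda row: row.Balance <= row.CreditLimit,
                error_msg="balance ({row.Balance}) exceeds credit ({row.CreditLimit})")

Rule.sum(derive=models.Customer.Balance,         # balance = sum of unshipped order totals
         as_sum_of=models.Order.AmountTotal,
         where=lambda row: row.ShippedDate is None)

Rule.sum(derive=models.Order.AmountTotal,        # order total = sum of its item amounts
         as_sum_of=models.Item.Amount)

Rule.formula(derive=models.Item.Amount,          # amount = quantity * unit price
             as_expression=lambda row: row.Quantity * row.UnitPrice)

Rule.copy(derive=models.Item.UnitPrice,          # snapshot product price onto the item
          from_parent=models.Product.UnitPrice)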
3. Add Python for Flexibility

Automation and rules provide remarkable agility with very little in-depth knowledge required. However, automation always has its limits: you need flexibility to deliver a complete result. For flexibility, the appliance enables you to use Python and popular packages to complete the job, for example, customizing pricing discounts and sending Kafka messages.

Extensible Declarative Automation

This system might have taken weeks or months using conventional frameworks. But it's more than agility. The level of abstraction here is very high, bringing a level of simplicity that empowers you to create microservices, even if you are new to Python or frameworks such as Flask and SQLAlchemy. There are three key elements that deliver this speed and simplicity:

- Microservice automation: Instead of slow and complex framework coding, just plug into your database for an instant API and Admin App.
- Logic automation with declarative rules: Instead of tedious code that describes how logic operates, rules express what you want to accomplish.
- Extensibility: Finish the remaining elements with your IDE, Python, and standard packages such as Flask and SQLAlchemy.

This automation appliance can provide remarkable benefits, empowering more people to do more.
In modern application development, delivering personalized and controlled user experiences is paramount. This necessitates the ability to toggle features dynamically, enabling developers to adapt their applications in response to changing user needs and preferences. Feature flags, also known as feature toggles, have emerged as a critical tool for achieving this flexibility. These flags empower developers to activate or deactivate specific functionalities based on various criteria such as user access, geographic location, or user behavior. React, a popular JavaScript framework known for its component-based architecture, is widely adopted for building user interfaces; its modular nature makes React applications particularly well suited to integrating feature flags seamlessly. In this guide, we'll explore how to integrate feature flags into your React applications using IBM App Configuration, a robust platform designed to manage application features and configurations. IBM App Configuration can be integrated with any framework, be it React, Angular, Java, Go, etc. React's component-based architecture allows developers to build reusable and modular UI components, making it easier to manage complex user interfaces by breaking them down into smaller, self-contained units, and adding feature flags to React components makes those components easier to control.

Integrating With IBM App Configuration

IBM App Configuration provides a comprehensive platform for managing feature flags, environments, collections, segments, and more. Before delving into the tutorial, it's important to understand why integrating your React application with IBM App Configuration is worthwhile. By integrating, developers gain the ability to dynamically toggle features on and off within their applications, activating or deactivating specific functionalities based on factors such as user access, geographic location, or user preferences. This enhances user experiences and gives developers greater flexibility and control over feature deployments. Additionally, IBM App Configuration offers segments for targeted rollouts, enabling developers to gradually release features to specific groups of users. Overall, integrating with IBM App Configuration empowers developers to adapt their applications' behavior in real time, improving agility and enhancing user satisfaction. To begin integrating your React application with App Configuration, follow these steps:

1. Create an Instance

Start by creating an instance of IBM App Configuration on cloud.ibm.com. Within the instance, create an environment, such as Dev, to manage your configurations. Then create a collection. Collections come in handy when multiple feature flags are created for various projects: each project can have a collection in the same App Configuration instance, and you can tag feature flags to the collection they belong to.

2. Generate Credentials

Access the service credentials section and generate new credentials. These credentials will be required to authenticate your React application with App Configuration.

3. Install the SDK

In your React application, install the IBM App Configuration React SDK using npm:

Shell

npm i ibm-appconfiguration-react-client-sdk

4. Configure the Provider

In your index.js or App.js, wrap your application component with AppConfigProvider to enable App Configuration within your React app. The provider must wrap the application at the top level to ensure the entire application has access. The AppConfigProvider requires various parameters, all of which can be found in the credentials you created.

5. Access Feature Flags

Now, within your App Configuration instance, create feature flags to control specific functionalities. Copy the feature flag ID for integration into your code.

Integrating Feature Flags Into React Components

Once you've set up App Configuration in your React application, you can seamlessly integrate feature flags into your components.

Enable Components Dynamically

Use the feature flag ID copied from the App Configuration instance to toggle specific components based on the flag's status, as sketched below. This allows you to enable or disable features dynamically without redeploying your application.
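As a rough sketch of steps 4 and 5 above: the provider props and hook names shown here are assumptions based on the credentials described earlier, and the flag ID and page components are hypothetical, so verify the actual API surface against the SDK documentation:

TypeScript

import React from 'react';
// Assumed imports: confirm the real export names in the SDK docs
import { AppConfigProvider, useFeature } from 'ibm-appconfiguration-react-client-sdk';

// Hypothetical components toggled by the flag
const DarkHomePage = () => <div>Dark theme</div>;
const LightHomePage = () => <div>Light theme</div>;

function Home() {
  // 'dark-mode' is a hypothetical feature flag ID copied from the instance
  const darkMode = useFeature('dark-mode');
  return darkMode?.enabled ? <DarkHomePage /> : <LightHomePage />;
}

// Wrap the app at the top level so every component can read flags;
// the prop names below are assumptions drawn from the generated credentials
export default function App() {
  return (
    <AppConfigProvider
      region="us-south"
      guid="<instance-guid>"
      apikey="<apikey>"
      collectionId="<collection-id>"
      environmentId="dev"
    >
      <Home />
    </AppConfigProvider>
  );
}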
Utilizing Segments for Targeted Rollouts

IBM App Configuration offers segments to target specific groups of users, enabling personalized experiences and controlled rollouts. Here's how to leverage segments effectively:

Define Segments

Create segments based on user properties, behaviors, or other criteria to target specific user groups.

Rollout Percentage

Adjust the rollout percentage to control the percentage of users who receive the feature within a targeted segment. This enables gradual rollouts or A/B testing scenarios. For example:

- If the rollout percentage is set to 100% and a particular segment is targeted, the feature is rolled out to all the users in that segment.
- If the rollout percentage is set between 1% and 99% (say, 60%) and a particular segment is targeted, the feature is rolled out to a random 60% of the users in that segment.
- If the rollout percentage is set to 0% and a particular segment is targeted, the feature is rolled out to none of the users in that segment.

Conclusion

Integrating feature flags with IBM App Configuration empowers React developers to implement dynamic feature toggling and targeted rollouts seamlessly. By leveraging feature flags and segments, developers can deliver personalized user experiences while maintaining control over feature deployments. Start integrating feature flags into your React applications today to unlock enhanced flexibility and control in your development process.
In the dynamic landscape of microservices, managing communication and ensuring robust security and observability becomes a Herculean task. This is where Istio, a revolutionary service mesh, steps in, offering an elegant solution to these challenges. This article delves deep into the essence of Istio, illustrating its pivotal role in a Kubernetes (KIND) based environment, and guides you through a Helm-based installation process, ensuring a comprehensive understanding of Istio's capabilities and its impact on microservices architecture.

Introduction to Istio

Istio is an open-source service mesh that provides a uniform way to secure, connect, and monitor microservices. It simplifies configuration and management, offering powerful tools to handle traffic flows between services, enforce policies, and aggregate telemetry data, all without requiring changes to microservice code.

Why Istio?

In a microservices ecosystem, each service may be developed in a different programming language, have different versions, and require unique communication protocols. Istio provides a layer of infrastructure that abstracts these differences, enabling services to communicate with each other seamlessly. It introduces capabilities like:

- Traffic management: Advanced routing, load balancing, and fault injection
- Security: Robust ACLs, RBAC, and mutual TLS to ensure secure service-to-service communication
- Observability: Detailed metrics, logs, and traces for monitoring and troubleshooting

Setting Up a KIND-Based Kubernetes Cluster

Before diving into Istio, let's set up a Kubernetes cluster using KIND (Kubernetes IN Docker), a tool for running local Kubernetes clusters using Docker container "nodes." KIND is particularly suited for development and testing purposes.

# Install KIND
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-$(uname)-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind

# Create a cluster
kind create cluster --name istio-demo

This code snippet installs KIND and creates a new Kubernetes cluster named istio-demo. Ensure Docker is installed and running on your machine before executing these commands.

Helm-Based Installation of Istio

Helm, the package manager for Kubernetes, simplifies the deployment of complex applications. We'll use Helm to install Istio on our KIND cluster.

1. Install Helm

First, ensure Helm is installed on your system:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

2. Add the Istio Helm Repository

Add the Istio release repository to Helm:

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update

3. Install Istio Using Helm

Now, let's install the Istio base chart, the istiod service, and the Istio Ingress Gateway:

# Install the Istio base chart
helm install istio-base istio/base -n istio-system --create-namespace

# Install the Istiod service
helm install istiod istio/istiod -n istio-system --wait

# Install the Istio Ingress Gateway
helm install istio-ingress istio/gateway -n istio-system

This sequence of commands sets up Istio on your Kubernetes cluster, creating a powerful platform for managing your microservices. To enable Istio injection for the target namespace, use the following command:
kubectl label namespace default istio-injection=enabled

Exploring Istio's Features

To demonstrate Istio's powerful capabilities in a microservices environment, let's use a practical example involving a Kubernetes cluster with Istio installed, and deploy a simple weather application. This application, running in a Docker container brainupgrade/weather-py, serves weather information. We'll illustrate how Istio can be utilized for traffic management, specifically demonstrating a canary release strategy, which is a method to roll out updates gradually to a small subset of users before rolling them out to the entire infrastructure.

Step 1: Deploy the Weather Application

First, let's deploy the initial version of our weather application using Kubernetes. We will deploy two versions of the application to simulate a canary release. Create a Kubernetes Deployment and Service for the weather application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: weather
      version: v1
  template:
    metadata:
      labels:
        app: weather
        version: v1
    spec:
      containers:
      - name: weather
        image: brainupgrade/weather-py:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: weather-service
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: weather

Apply this configuration with kubectl apply -f <file-name>.yaml.

Step 2: Enable Traffic Management With Istio

Now, let's use Istio to manage traffic to our weather application. We'll start by deploying a Gateway and a VirtualService to expose our application.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: weather-gateway
spec:
  selector:
    istio: ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: weather
spec:
  hosts:
  - "*"
  gateways:
  - weather-gateway
  http:
  - route:
    - destination:
        host: weather-service
        port:
          number: 80

This setup routes all traffic through the Istio Ingress Gateway to our weather-service.

Step 3: Implementing Canary Release

Let's assume we have a new version (v2) of our weather application that we want to roll out gradually. We'll adjust our Istio VirtualService to route a small percentage of the traffic to the new version.

1. Deploy version 2 of the weather application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: weather
      version: v2
  template:
    metadata:
      labels:
        app: weather
        version: v2
    spec:
      containers:
      - name: weather
        image: brainupgrade/weather-py:v2
        ports:
        - containerPort: 80

2. Adjust the Istio VirtualService to split traffic between v1 and v2:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: weather
spec:
  hosts:
  - "*"
  gateways:
  - weather-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: weather-service
        port:
          number: 80
        subset: v1
      weight: 90
    - destination:
        host: weather-service
        port:
          number: 80
        subset: v2
      weight: 10

This configuration routes 90% of the traffic to version 1 of the application and 10% to version 2, implementing a basic canary release. For the v1 and v2 subsets to resolve, you also need to create a matching DestinationRule.
See the following:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: weather-service
  namespace: default
spec:
  host: weather-service
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

This example illustrates how Istio enables sophisticated traffic management strategies like canary releases in a microservices environment. By leveraging Istio, developers can ensure that new versions of their applications are gradually and safely exposed to users, minimizing the risk of introducing issues. Istio's service mesh architecture provides a powerful toolset for managing microservices, enhancing both the reliability and flexibility of application deployments.

Istio and Kubernetes Services

Istio and Kubernetes Services are both crucial components in the cloud-native ecosystem, but they serve different purposes and operate at different layers of the stack. Understanding how Istio differs from Kubernetes Services is essential for architects and developers looking to build robust, scalable, and secure microservices architectures.

Kubernetes Services

Kubernetes Services are a fundamental part of Kubernetes, providing an abstract way to expose an application running on a set of Pods as a network service. With Kubernetes Services, you can utilize the following:

- Discoverability: Assign a stable IP address and DNS name to a group of Pods, making them discoverable within the cluster.
- Load balancing: Distribute network traffic or requests among the Pods that constitute a service, improving application scalability and availability.
- Abstraction: Decouple the front-end service from the back-end workloads, allowing back-end Pods to be replaced or scaled without reconfiguring the front-end clients.

Kubernetes Services focus on internal cluster communication, load balancing, and service discovery. They operate at the L4 (TCP/UDP) layer, primarily dealing with IP addresses and ports.

Istio Services

Istio, on the other hand, extends the capabilities of Kubernetes Services by providing a comprehensive service mesh that operates at a higher level. It is designed to manage, secure, and observe microservices interactions across different environments. Istio's features include:

- Advanced traffic management: Beyond simple load balancing, Istio offers fine-grained control over traffic with rich routing rules, retries, failovers, and fault injection. It operates at L7 (HTTP/HTTPS/gRPC), allowing behavior to be controlled based on HTTP headers and URLs.
- Security: Istio provides end-to-end security, including strong identity-based authentication and authorization between services, transparently encrypting communication with mutual TLS, without requiring changes to application code.
- Observability: It offers detailed insights into the behavior of the microservices, including automatic metrics, logs, and traces for all traffic within a cluster, regardless of the service language or framework.
- Policy enforcement: Istio allows administrators to enforce policies across the service mesh, ensuring compliance with security, auditing, and operational policies.

Key Differences

Scope and Layer

Kubernetes Services operate at the infrastructure layer, focusing on L4 (TCP/UDP) for service discovery and load balancing. Istio operates at the application layer, providing L7 (HTTP/HTTPS/gRPC) traffic management, security, and observability features.
Capabilities

While Kubernetes Services provide basic load balancing and service discovery, Istio offers advanced traffic management (like canary deployments and circuit breakers), secure service-to-service communication (with mutual TLS), and detailed observability (tracing, monitoring, and logging).

Implementation and Overhead

Kubernetes Services are integral to Kubernetes and require no additional installation. Istio, being a service mesh, is an add-on layer that introduces additional components (like Envoy sidecar proxies) into the application pods, which can add overhead but also provide enhanced control and visibility.

Kubernetes Services and Istio complement each other in the cloud-native ecosystem. Kubernetes Services provide the basic functionality necessary for service discovery and load balancing within a Kubernetes cluster. Istio extends these capabilities, adding advanced traffic management, enhanced security features, and observability into microservices communications. For applications requiring fine-grained control over traffic, secure communication, and deep observability, integrating Istio with Kubernetes offers a powerful platform for managing complex microservices architectures.

Conclusion

Istio stands out as a transformative force in the realm of microservices, providing a comprehensive toolkit for managing the complexities of service-to-service communication in a cloud-native environment. By leveraging Istio, developers and architects can significantly streamline their operational processes, ensuring a robust, secure, and observable microservices architecture. Incorporating Istio into your microservices strategy not only simplifies operational challenges but also paves the way for innovative service management techniques. As we continue to explore and harness the capabilities of service meshes like Istio, the future of microservices looks promising, characterized by enhanced efficiency, security, and scalability.
In the world of modern web development, security is paramount. With the rise of sophisticated cyber threats, developers need robust tools and frameworks to build secure applications. Deno, a secure runtime for JavaScript and TypeScript, has emerged as a promising solution for developers looking to enhance the security of their applications. Deno was created by Ryan Dahl, the original creator of Node.js, with a focus on addressing some of the security issues present in Node.js. Deno comes with several built-in security features that make it a compelling choice for developers concerned about application security. This guide will explore some of the key security features of Deno and how they can help you build trustworthy applications.

Deno's "Secure by Default" Features

Deno achieves "Secure by Default" through several key design choices and built-in features:

- No file, network, or environment access by default: Unlike Node.js, which grants access to the file system, network, and environment variables by default, Deno restricts these permissions unless explicitly granted. This reduces the attack surface of applications running in Deno.
- Explicit permissions: Deno requires explicit permissions for accessing files, networks, and other resources, which are granted through command-line flags or configuration files. This helps developers understand and control the permissions their applications have.
- Built-in security features: Deno includes several built-in security features, such as a secure runtime environment (using V8 and Rust), automatic updates, and a dependency inspector to identify potentially unsafe dependencies.
- Secure standard library: Deno provides a secure standard library for common tasks, such as file I/O, networking, and cryptography, which is designed with security best practices in mind.
- Sandboxed execution: Deno uses V8's built-in sandboxing features to isolate the execution of JavaScript and TypeScript code, preventing it from accessing sensitive resources or interfering with other applications.
- No access to critical system resources: Deno does not have access to critical system resources, such as the registry (Windows) or keychain (macOS), further reducing the risk of security vulnerabilities.

Overall, Deno's "Secure by Default" approach aims to provide developers with a safer environment for building applications, helping to mitigate common security risks associated with JavaScript and TypeScript development.
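To make the default-deny behavior concrete, here is a minimal sketch. Running it without flags is blocked, while a narrowly scoped flag lets it proceed; the exact error text, and whether Deno prompts interactively, varies by version.

TypeScript
// read_config.ts: a minimal sketch of Deno's default-deny model.
// Blocked:  deno run read_config.ts
// Allowed:  deno run --allow-read=./config.json read_config.ts
try {
  const text = await Deno.readTextFile("./config.json");
  console.log(`Config loaded (${text.length} bytes)`);
} catch (err) {
  // Without --allow-read, Deno refuses the read instead of silently
  // granting file access the way Node.js does by default.
  console.error("Read was blocked:", err);
}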
Comparison of "Secure by Default" With Node.js

Deno takes a more proactive approach to security by restricting access to resources by default and requiring explicit permissions for access. It also includes built-in security features and a secure standard library, making it more secure by default compared to Node.js.

- File access: denied by default in Deno, requiring explicit permission; allowed by default in Node.js.
- Network access: denied by default in Deno, requiring explicit permission; allowed by default in Node.js.
- Environment access: denied by default in Deno, requiring explicit permission; allowed by default in Node.js.
- Permissions system: Deno uses command-line flags or configuration files; Node.js requires setting environment variables.
- Built-in security: Deno includes built-in security features; Node.js lacks comprehensive built-in security.
- Standard library: Deno ships a secure standard library; Node.js's standard library has potential vulnerabilities.
- Sandboxed execution: Deno uses V8's sandboxing features; Node.js lacks built-in sandboxing features.
- Access to resources: Deno restricts access to critical system resources; Node.js may have access to critical system resources.

Permission Model

Deno's permission model is central to its "Secure by Default" approach. Here's how it works:

- No implicit permissions: In Deno, access to resources like the file system, network, and environment variables is denied by default. This means that even if a script tries to access these resources, it will be blocked unless the user explicitly grants permission.
- Explicit permission requests: When a Deno script attempts to access a resource that requires permission, such as reading a file or making a network request, Deno will throw an error indicating that permission is required. The script must then be run again with the appropriate command-line flag (--allow-read, --allow-net, etc.) to grant the necessary permission.
- Fine-grained permissions: Deno's permission system is designed to be fine-grained, allowing developers to grant specific permissions for different operations. For example, a script might be granted permission to read files but not write them, or to access a specific network address but not others.
- Scoped permissions: Permissions in Deno are scoped to the script's URL. This means that if a script is granted permission to access a resource, it can only access that specific resource and not others. This helps prevent scripts from accessing resources they shouldn't have access to.
- Permissions prompt: When a script requests permission for the first time, Deno will prompt the user to grant or deny permission. This helps ensure that the user is aware of the permissions being requested and can make an informed decision about whether to grant them.

Overall, Deno's permission model is designed to give developers fine-grained control over the resources their scripts can access, while also ensuring that access is only granted when explicitly requested and authorized by the user. This helps prevent unauthorized access to sensitive resources and contributes to Deno's "Secure by Default" approach.
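Beyond launch-time flags, the permission state can also be inspected and requested at runtime through the Deno.permissions API. The sketch below assumes a reasonably recent Deno release; example.com is just a stand-in host.

TypeScript
// permissions_demo.ts: querying and requesting permissions at runtime.
// Check the current state of read access to a specific path.
const read = await Deno.permissions.query({ name: "read", path: "./data" });
console.log("read ./data:", read.state); // "granted", "prompt", or "denied"

// Request network access to a single host; in an interactive run Deno
// prompts the user, otherwise the state reflects the launch flags.
const net = await Deno.permissions.request({ name: "net", host: "example.com" });
if (net.state === "granted") {
  const res = await fetch("https://example.com/");
  console.log("Fetched with HTTP status", res.status);
}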
Sandboxing

Sandboxing in Deno helps achieve "secure by default" by isolating the execution of JavaScript and TypeScript code within a restricted environment. This isolation prevents code from accessing sensitive resources or interfering with other applications, enhancing the security of the runtime. Here's how sandboxing helps in Deno:

- Isolation: Sandboxing in Deno uses V8's built-in sandboxing features to create a secure environment for executing code. This isolation ensures that code running in Deno cannot access resources outside of its sandbox, such as the file system or network, without explicit permission.
- Prevention of malicious behavior: By isolating code in a sandbox, Deno can prevent malicious code from causing harm to the system or other applications. Even if a piece of code is compromised, it is limited in its ability to access sensitive resources or perform malicious actions.
- Enhanced security: Sandboxing helps enhance the overall security of Deno by reducing the attack surface available to potential attackers. It adds an additional layer of protection against common security vulnerabilities, such as arbitrary code execution or privilege escalation.
- Controlled access to resources: Sandboxing allows Deno to control access to resources by requiring explicit permissions for certain actions. This helps ensure that applications only access resources they are authorized to access, reducing the risk of unauthorized access.

Overall, sandboxing plays a crucial role in Deno's "secure by default" approach by providing a secure environment for executing code and preventing malicious behavior. It helps enhance the security of applications running in Deno by limiting their access to resources and reducing the impact of potential security vulnerabilities.

Secure Runtime APIs

Deno's secure runtime APIs provide a robust foundation for building secure applications by default. With features such as sandboxed execution, explicit permission requests, and controlled access to critical resources, Deno ensures that applications run in a secure environment. Sandboxed execution isolates code, preventing it from accessing sensitive resources or interfering with other applications. Deno's permission model requires explicit permission requests for accessing resources like the file system, network, and environment variables, reducing the risk of unintended or malicious access. Additionally, Deno's secure runtime APIs do not have access to critical system resources, further enhancing security. Overall, Deno's secure runtime APIs help developers build secure applications from the ground up, making security a core part of the development process.

Implement Secure Runtime APIs

Implementing secure runtime APIs in Deno involves using Deno's built-in features and following best practices for secure coding. Here's how you can implement secure-by-default behavior in Deno with examples:

- Explicitly request permissions: Use Deno's permission model to explicitly request access to resources. For example, to read from a file, you would use the --allow-read flag:

TypeScript
const file = await Deno.open("example.txt");
// Read from the file...
Deno.close(file.rid);

- Avoid insecure features: Instead of using Node.js-style child_process for executing shell commands, use Deno's Deno.run API, which is designed to be more secure:

TypeScript
const process = Deno.run({
  cmd: ["echo", "Hello, Deno!"],
});
await process.status();

- Enable secure flag for import maps: When using import maps, ensure the secure flag is enabled to restrict imports to HTTPS URLs only:

JSON
{
  "imports": {
    "example": "https://example.com/module.ts"
  },
  "secure": true
}

- Use HTTPS for network requests: Always use HTTPS for network requests.
Deno's fetch API supports HTTPS by default:

TypeScript
const response = await fetch("https://example.com/data.json");
const data = await response.json();

- Update dependencies regularly: Use Deno's built-in security audits to identify and update dependencies with known vulnerabilities:

Shell
deno audit

- Enable secure runtime features: Take advantage of Deno's secure runtime features, such as automatic updates and dependency inspection, to enhance the security of your application.
- Implement secure coding practices: Follow secure coding practices, such as input validation and proper error handling, to minimize security risks in your code.

Managing Dependencies To Reduce Security Risks

To reduce security risks associated with dependencies, consider the following recommendations:

- Regularly update dependencies: Regularly update your dependencies to the latest versions, as newer versions often include security patches and bug fixes. Use tools like deno audit to identify and update dependencies with known vulnerabilities.
- Use semantic versioning: Follow semantic versioning (SemVer) for your dependencies and specify version ranges carefully in your deps.ts file to ensure that you receive bug fixes and security patches without breaking changes (see the deps.ts sketch after this list).
- Limit dependency scope: Only include dependencies that are necessary for your project's functionality. Avoid including unnecessary or unused dependencies, as they can introduce additional security risks.
- Use import maps: Use import maps to explicitly specify the mapping between module specifiers and URLs. This helps prevent the use of malicious or insecure dependencies by controlling which dependencies are used in your application.
- Check dependency health: Regularly check the health of your dependencies using tools like `deno doctor` or third-party services. Look for dependencies with known vulnerabilities or that are no longer actively maintained.
- Use dependency analysis tools: Use dependency analysis tools to identify and remove unused dependencies, as well as to detect and fix vulnerabilities in your dependencies.
- Review third-party code: When using third-party dependencies, review the source code and documentation to ensure that they meet your security standards. Consider using dependencies from reputable sources or well-known developers.
- Monitor for security vulnerabilities: Monitor security advisories and mailing lists for your dependencies to stay informed about potential security vulnerabilities. Consider using automated tools to monitor for vulnerabilities in your dependencies.
- Consider security frameworks: Consider using security frameworks and libraries that provide additional security features, such as input validation, authentication, and encryption, to enhance the security of your application.
- Implement secure coding practices: Follow secure coding practices to minimize security risks in your code, such as input validation, proper error handling, and using secure algorithms for cryptography.
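One widely used convention for the points above is to centralize dependencies in a single deps.ts and pin exact versions over HTTPS, so auditing or upgrading a dependency touches one reviewed file. A minimal sketch follows; the module URLs and versions are illustrative.

TypeScript
// deps.ts: a single, reviewed list of pinned, HTTPS-only dependencies.
// Application code imports from this file rather than from remote URLs,
// so a version bump or an audit happens in exactly one place.
export * as path from "https://deno.land/std@0.177.0/path/mod.ts";
export { assertEquals } from "https://deno.land/std@0.177.0/testing/asserts.ts";

// Any other module in the project would then import:
//   import { path } from "./deps.ts";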
Secure Coding Best Practices

Secure coding practices in Deno are similar to those in other programming languages but are adapted to Deno's unique features and security model. Here are some best practices for secure coding in Deno:

- Use explicit permissions: Always use explicit permissions when accessing resources like the file system, network, or environment variables. Use the --allow-read, --allow-write, --allow-net, and other flags to grant permissions only when necessary.
- Avoid using unsafe APIs: Deno provides secure alternatives to some Node.js APIs that are considered unsafe, such as the child_process module. Use Deno's secure APIs instead.
- Sanitize input: Always sanitize user input to prevent attacks like SQL injection, XSS, and command injection. Use libraries like std/encoding/html to encode HTML entities and prevent XSS attacks (see the validation sketch after this list).
- Use HTTPS: Always use HTTPS for network communication to ensure data integrity and confidentiality. Deno's fetch API supports HTTPS by default.
- Validate dependencies: Regularly audit and update your dependencies to ensure they are secure. Use Deno's built-in audit tools to identify and mitigate vulnerabilities in your dependencies.
- Use the secure standard library: Deno's standard library (std) provides secure implementations of common functionality. Use these modules instead of relying on third-party libraries with potential vulnerabilities.
- Avoid eval: Avoid using eval or similar functions, as they can introduce security vulnerabilities by executing arbitrary code. Use alternative approaches, such as functions and modules, to achieve the desired functionality.
- Minimize dependencies: Minimize the number of dependencies in your project to reduce the attack surface. Only include dependencies that are necessary for your application's functionality.
- Regularly update Deno: Keep Deno up to date with the latest security patches and updates to mitigate potential vulnerabilities in the runtime.
- Enable secure flags: When using import maps, enable the secure flag to restrict imports to HTTPS URLs only, preventing potential security risks associated with HTTP imports.
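As one example that pairs input validation with scoped permissions, the sketch below serves only simple file names from a single directory; the directory layout and validation pattern are illustrative.

TypeScript
// read_public.ts: validate untrusted input before touching the disk.
// Run with a narrowly scoped grant:
//   deno run --allow-read=./public read_public.ts report.txt
const requested = Deno.args[0] ?? "";

// Accept plain file names only; rejecting separators and ".." blocks
// traversal attempts such as "../../etc/passwd".
if (!/^[\w.-]+$/.test(requested) || requested.includes("..")) {
  console.error("Rejected suspicious file name:", requested);
  Deno.exit(1);
}

const contents = await Deno.readTextFile(`./public/${requested}`);
console.log(contents);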
Conclusion

Deno's design philosophy, which emphasizes security and simplicity, makes it an ideal choice for developers looking to build secure applications. Deno's permission model and sandboxing features ensure that applications have access only to the resources they need, reducing the risk of unauthorized access and data breaches. Additionally, Deno's secure runtime APIs provide developers with tools to implement encryption, authentication, and other security measures effectively. By leveraging Deno's security features, developers can build applications that are not only secure but also reliable and trustworthy. Deno's emphasis on security from the ground up helps developers mitigate common security risks and build applications that users can trust. As we continue to rely more on digital technologies, the importance of building trustworthy applications cannot be overstated, and Deno provides developers with the tools they need to meet this challenge head-on.

Mark Gardner
Independent Contractor,
The Perl Shop
Nuwan Dias
VP and Deputy CTO,
WSO2
Radivoje Ostojic
Principal Software Engineer,
BrightMarbles
Adam Houghton
Senior Software Developer,
SAS Institute