The 9.2 release of Komodo IDE is live and includes new features such as Docker and Vagrant integration, collaboration improvements, a package installer, and UI changes.
Learn how to use the service registration and discovery features in ZooKeeper to manage microservices when refactoring an existing monolithic application.
Liquibase is a great tool, comparable to Git for databases. While it might not technically be a source control system, it's packed with similar functionality.
Distributed transaction tracing is a useful way to monitor or evaluate your microservices architecture for performance, particularly when measuring end-to-end requests.
In this article, excerpted from the book Docker in Action, I will show you how to open access to shared memory between containers. Linux provides a few tools for sharing memory between processes running on the same computer. This form of inter-process communication (IPC) performs at memory speeds. It is often used when the latency associated with network- or pipe-based IPC drags software performance below requirements. The best examples of shared-memory-based IPC usage are in scientific computing and some popular database technologies like PostgreSQL.

Docker creates a unique IPC namespace for each container by default. The Linux IPC namespace partitions shared memory primitives such as named shared memory blocks and semaphores, as well as message queues. It is okay if you are not sure what these are; just know that they are tools used by Linux programs to coordinate processing. The IPC namespace prevents processes in one container from accessing the memory on the host or in other containers.

Sharing IPC Primitives Between Containers

I've created an image named allingeek/ch6_ipc that contains both a producer and a consumer. They communicate using shared memory. Listing 1 will help you understand the problem with running these in separate containers.

Listing 1: Launch a Communicating Pair of Programs

```shell
# start a producer
docker run -d -u nobody --name ch6_ipc_producer \
    allingeek/ch6_ipc -producer

# start the consumer
docker run -d -u nobody --name ch6_ipc_consumer \
    allingeek/ch6_ipc -consumer
```

Listing 1 starts two containers. The first creates a message queue and starts broadcasting messages on it. The second should pull from the message queue and write the messages to the logs. You can see what each is doing by using the following commands to inspect the logs of each:

```shell
docker logs ch6_ipc_producer
docker logs ch6_ipc_consumer
```

If you executed the commands in Listing 1, something should be wrong: the consumer never sees any messages on the queue.
Each process used the same key to identify the shared memory resource, but they referred to different memory. The reason is that each container has its own shared memory namespace. If you need to run programs that communicate with shared memory in different containers, then you will need to join their IPC namespaces with the --ipc flag. The --ipc flag has a container mode that will create a new container in the same IPC namespace as another target container.

Listing 2: Joining Shared Memory Namespaces

```shell
# remove the original consumer
docker rm -v ch6_ipc_consumer

# start a new consumer with a joined IPC namespace
docker run -d --name ch6_ipc_consumer \
    --ipc container:ch6_ipc_producer \
    allingeek/ch6_ipc -consumer
```

Listing 2 rebuilds the consumer container and reuses the IPC namespace of the ch6_ipc_producer container. This time the consumer should be able to access the same memory location where the producer is writing. You can see this working by using the following commands to inspect the logs of each:

```shell
docker logs ch6_ipc_producer
docker logs ch6_ipc_consumer
```

Remember to clean up your running containers before moving on:

```shell
# remember: the v option will clean up volumes,
# the f option will kill the container if it is running,
# and the rm command takes a list of containers
docker rm -vf ch6_ipc_producer ch6_ipc_consumer
```

There are obvious security implications to reusing the shared memory namespaces of containers, but this option is available if you need it. Sharing memory between containers is a safer alternative to sharing memory with the host.
The loop is the classic way of processing collections, but with the greater adoption of first-class functions in programming languages the collection pipeline is an appealing alternative. In this article I look at refactoring loops to collection pipelines with a series of small examples.

I'm publishing this article in installments. This installment adds an example of refactoring a loop that summarizes flight delay data for each destination airport.

A common task in programming is processing a list of objects. Most programmers naturally do this with a loop, as it's one of the basic control structures we learn with our very first programs. But loops aren't the only way to represent list processing, and in recent years more people are making use of another approach, which I call the collection pipeline. This style is often considered to be part of functional programming, but I used it heavily in Smalltalk. As OO languages support lambdas and libraries that make first-class functions easier to program with, collection pipelines become an appealing choice.

Refactoring a Simple Loop into a Pipeline

I'll start with a simple example of a loop and show the basic way I refactor one into a collection pipeline. Let's imagine we have a list of authors, each of which has the following data structure. (This example uses C#.)

```csharp
class Author...
  public string Name { get; set; }
  public string TwitterHandle { get; set; }
  public string Company { get; set; }
```

Here is the loop.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    foreach (Author a in authors) {
      if (a.Company == company) {
        var handle = a.TwitterHandle;
        if (handle != null)
          result.Add(handle);
      }
    }
    return result;
  }
```

My first step in refactoring a loop into a collection pipeline is to apply Extract Variable on the loop collection.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors;
    foreach (Author a in loopStart) {
      if (a.Company == company) {
        var handle = a.TwitterHandle;
        if (handle != null)
          result.Add(handle);
      }
    }
    return result;
  }
```

This variable gives me a starting point for pipeline operations. I don't have a good name for it right now, so I'll use one that makes sense for the moment, expecting to rename it later.

I then start looking at bits of behavior in the loop. The first thing I see is a conditional check; I can move this to the pipeline with a filter operation.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors
      .Where(a => a.Company == company);
    foreach (Author a in loopStart) {
      var handle = a.TwitterHandle;
      if (handle != null)
        result.Add(handle);
    }
    return result;
  }
```

I see the next part of the loop operates on the twitter handle, rather than the author, so I can use a map operation.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors
      .Where(a => a.Company == company)
      .Select(a => a.TwitterHandle);
    foreach (string handle in loopStart) {
      if (handle != null)
        result.Add(handle);
    }
    return result;
  }
```

Next in the loop is another conditional, which again I can move to a filter operation.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    var result = new List<string>();
    var loopStart = authors
      .Where(a => a.Company == company)
      .Select(a => a.TwitterHandle)
      .Where(h => h != null);
    foreach (string handle in loopStart) {
      result.Add(handle);
    }
    return result;
  }
```

All the loop now does is add everything in its loop collection into the result collection, so I can remove the loop and just return the pipeline result. Here's the final state of the code.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return authors
      .Where(a => a.Company == company)
      .Select(a => a.TwitterHandle)
      .Where(h => h != null);
  }
```

What I like about collection pipelines is that I can see the flow of logic as the elements of the list pass through the pipeline. For me it reads very closely to how I'd define the outcome of the loop: "take the authors, choose those in the company, and get their twitter handles, removing any null handles". Furthermore, this style of code is familiar even in different languages that have different syntaxes and different names for pipeline operators.

Java

```java
public List<String> twitterHandles(List<Author> authors, String company) {
  return authors.stream()
      .filter(a -> a.getCompany().equals(company))
      .map(a -> a.getTwitterHandle())
      .filter(h -> null != h)
      .collect(toList());
}
```

Ruby

```ruby
def twitter_handles authors, company
  authors
    .select {|a| company == a.company}
    .map {|a| a.twitter_handle}
    .reject {|h| h.nil?}
end
```

While this matches the other examples, I would replace the final reject with compact.

Clojure

```clojure
(defn twitter-handles [authors company]
  (->> authors
       (filter #(= company (:company %)))
       (map :twitter-handle)
       (remove nil?)))
```

F#

```fsharp
let twitterHandles (authors : seq<Author>, company : string) =
  authors
  |> Seq.filter(fun a -> a.Company = company)
  |> Seq.map(fun a -> a.TwitterHandle)
  |> Seq.choose (fun h -> h)
```

Again, if I wasn't concerned about matching the structure of the other examples, I would combine the map and choose into a single step.

I've found that once I got used to thinking in terms of pipelines I could apply them quickly even in an unfamiliar language. Since the fundamental approach is the same, it's relatively easy to translate even unfamiliar syntax and function names.

Refactoring within the Pipeline, and to a Comprehension

Once you have some behavior expressed as a pipeline, there are potential refactorings you can do by reordering steps in the pipeline. One such move is that if you have a map followed by a filter, you can usually move the filter before the map, like this.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return authors
      .Where(a => a.Company == company)
      .Where(a => a.TwitterHandle != null)
      .Select(a => a.TwitterHandle);
  }
```

When you have two adjacent filters, you can combine them using a conjunction.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return authors
      .Where(a => a.Company == company && a.TwitterHandle != null)
      .Select(a => a.TwitterHandle);
  }
```

Once I have a C# collection pipeline in the form of a simple filter and map like this, I can replace it with a LINQ expression.

```csharp
class Author...
  static public IEnumerable<string> TwitterHandles(IEnumerable<Author> authors, string company) {
    return from a in authors
           where a.Company == company && a.TwitterHandle != null
           select a.TwitterHandle;
  }
```

I consider LINQ expressions to be a form of list comprehension, and similarly you can do something like this with any language that supports list comprehensions. It's a matter of taste whether you prefer the list comprehension form or the pipeline form (I prefer pipelines). In general pipelines are more powerful, in that you can't refactor all pipelines into comprehensions.
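As a cross-language footnote not in the original article, the same pipeline-versus-comprehension choice can be sketched in Python (the data and author names here are made up for illustration): the first function chains filter and map steps, the second is the comprehension form analogous to the LINQ expression.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Author:
    name: str
    twitter_handle: Optional[str]
    company: str

def twitter_handles_pipeline(authors, company):
    # pipeline style: filter, then map, then filter out nulls
    with_company = filter(lambda a: a.company == company, authors)
    handles = map(lambda a: a.twitter_handle, with_company)
    return [h for h in handles if h is not None]

def twitter_handles_comprehension(authors, company):
    # comprehension style, analogous to the LINQ expression form
    return [a.twitter_handle
            for a in authors
            if a.company == company and a.twitter_handle is not None]

authors = [
    Author("Alice", "@alice", "Acme"),
    Author("Bob", None, "Acme"),
    Author("Carol", "@carol", "Globex"),
]
print(twitter_handles_pipeline(authors, "Acme"))       # ['@alice']
print(twitter_handles_comprehension(authors, "Acme"))  # ['@alice']
```

As in the C# version, both forms read as a declarative description of the result rather than as step-by-step mutation of an accumulator.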
[This article was written by Sveta Smirnova]

Like any good, and thus lazy, engineer I don't like to start things manually. Creating directories and configuration files, and specifying paths and ports via the command line, is too boring. I already wrote about how I survive when I need to start a MySQL server (here). There is also MySQL Sandbox, which can be used for the same purpose.

But what to do if you want to start Percona XtraDB Cluster this way? Fortunately we, at Percona, have engineers who created an automation solution for starting PXC. This solution uses Docker. To explore it you need to:

1. Clone the pxc-docker repository:

```shell
git clone https://github.com/percona/pxc-docker
```

2. Install Docker Compose as described here.

3. cd pxc-docker/docker-bld

4. Follow the instructions from the README file:

a) ./docker-gen.sh 5.6 (docker-gen.sh takes a PXC branch as an argument; 5.6 is the default, and it looks for it on github.com/percona/percona-xtradb-cluster)

b) Optional: docker-compose build (if you see it is not updating with changes).

c) docker-compose scale bootstrap=1 members=2 for a 3-node cluster

5. Check which ports are assigned to the containers:

```shell
$ docker port dockerbld_bootstrap_1 3306
0.0.0.0:32768
$ docker port dockerbld_members_1 4567
0.0.0.0:32772
$ docker port dockerbld_members_2 4568
0.0.0.0:32776
```

Now you can connect with the usual MySQL clients:

```shell
$ mysql -h 0.0.0.0 -P 32768 -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.6.21-70.1 MySQL Community Server (GPL), wsrep_25.8.rXXXX

Copyright (c) 2009-2015 Percona LLC and/or its affiliates
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
```

6. To change MySQL options, either pass a mount at runtime with something like volume: /tmp/my.cnf:/etc/my.cnf in docker-compose.yml, or connect to the container's bash (docker exec -i -t container_name /bin/bash), then change my.cnf and run docker restart container_name.

Notes:

- If you don't want to build, use the ready-to-use images.
- If you don't want to run Docker Compose as the root user, add yourself to the docker group.
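For the my.cnf mount described in step 6, the docker-compose.yml entry would look something like the sketch below. The `members` service name is an assumption based on the scale command above; check the service names in the repository's actual Compose file. Note that Compose expects a `volumes` list, not a single `volume` key.

```yaml
members:
  volumes:
    - /tmp/my.cnf:/etc/my.cnf
```

After editing the mounted file on the host, restart the container so mysqld rereads its configuration.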
TROLEE is an online shopping portal based in India, offering fashion products to customers worldwide. TROLEE offers a wide range of products in the categories of designer sarees, salwar kameez, kurtis, exclusive wedding collections, Indian designer collections, Western outfits, jeans, T-shirts, and women's apparel at wholesale prices in India. Metaphorically, TROLEE has been known as a shopping paradise, as customers always feel they can shop bigger than ever in each event TROLEE organizes. Shipping is free on every order within India, and delivery is available worldwide. Our customers have appreciated our festival offers and discounts, assured service, and quality products. Just visit us at trolee.com.