Pydantic works with any Python-based framework and supports native JSON encoding and decoding. Here, learn how simple it is to adopt Pydantic.
NAS is not a new technology, but it still plays a crucial role in data storage and accessibility, providing centralized storage connected to a network.
For developers who have worked on J2EE web applications for many years, getting into Flex will seem both fun and familiar, thanks to the simplicity and power of ActionScript and the UI framework, and quite tedious and frustrating when it comes to developing the core application logic and the server integration. In some ways, developing Flex applications with widely used frameworks like Cairngorm and BlazeDS involves a lot of plumbing code (business delegates, service facades, conversions between JPA entities and value objects, ...) and will remind you of the days of Struts and EJB2.

Data is King

Data is, and will continue to be, an invaluable resource that companies and individuals alike can use to advance their goals; by some measures, it is among the most valuable resources available today. It is worth both investing in data services and building up your knowledge of what they are and why they matter to so many projects. The better we understand this topic, the better our chances of using data the way it was designed to be used.

The Granite Data Services project was started with the (ambitious) goal of providing Flex with the same kind of development model we were used to with modern J2EE frameworks. The GDS remoting functionality has been designed from the beginning to support the serialization of JPA/Hibernate detached entities and to connect easily to the most important J2EE frameworks (EJB3, Spring, Seam, Guice). In most cases, this removes the need to write and maintain service facades and value objects on the J2EE layer. In fact, that finally means a Flex client can consume the exact same set of server services as a classic web application. Another repetitive task is building the ActionScript model classes.
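To make that repetitive task concrete, here is a minimal, hypothetical Java model class of the kind a Flex developer would otherwise have to mirror by hand in ActionScript. The class and its fields are illustrative only (loosely inspired by the booking sample used later); in a real project the class would carry JPA annotations such as @Entity and @Id, omitted here so the sketch compiles with the standard library alone:

```java
// Hypothetical Java data model class. In a real project it would be a
// JPA entity (@Entity, @Id, ...); annotations are omitted so this sketch
// is self-contained.
public class Hotel {
    private Long id;
    private String name;
    private String city;

    public Hotel(Long id, String name, String city) {
        this.id = id;
        this.name = name;
        this.city = city;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
    public String getCity() { return city; }

    // Without a code generator, a matching ActionScript class (Hotel.as,
    // with the same fields and accessors) must be written and kept in
    // sync by hand -- exactly the repetitive work described above.
}
```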
GraniteDS provides the Gas3 tool, which can automatically generate the ActionScript model classes from the Java data model. With the latest GraniteDS 1.1 release, the process is further improved by the Granite Eclipse builder plugin, which regenerates the necessary ActionScript classes on the fly whenever JPA entities are created or modified in the Eclipse project. You just write your JPA data model, and you can directly use the generated AS3 classes in the Flex UI layer.

This is already a big step towards simpler server integration in Flex, but GDS 1.1 brings an even simpler programming model: the Tide project, which aims to bring to Flex/AIR applications the same simplicity that JBoss Seam has brought to J2EE. It targets only JBoss Seam for now, but integrations with Spring and EJB3 are already on the way. Tide requires almost no configuration during development and automates most of the plumbing code generally found, for example, in Cairngorm business delegates or service locators. Contrary to other Flex frameworks whose goal is to put all business logic on the client, Tide client/server interactions are done exclusively through method calls on exposed services, which respects the transaction boundaries, security, and validation rules defined by those services.

Tide mainly consists of a Flex library that provides data-oriented functionality:

- An entity cache that ensures all managed entity instances are unique in a Tide context. In particular, this allows correct data bindings to be maintained between calls to remote services.
- A collection wrapping mechanism that enables transparent initialization of lazy collections.
- A Flex collection component, integrated with JBoss Seam's paged query component, that can be used as a data provider for Flex UI components and supports remote sorting and filtering.
- Complete integration with Seam's events, both synchronous and asynchronous, allowing a Flex client to observe events raised by the server.
- Flex validators integrated with the server-side Hibernate Validator, allowing validation messages either on the fly or after remote calls.
- Client support for Seam conversations.
- A lightweight component-based Flex framework that is deeply integrated with the other features and can advantageously replace Cairngorm or PureMVC.

Let's look at a very simple example to see how this works.

Seam component (extracted from the Seam booking sample):

```java
@Stateful
@Name("hotelSearch")
@Scope(ScopeType.SESSION)
@Restrict("#{identity.loggedIn}")
public class HotelSearchingAction implements HotelSearching {

    @PersistenceContext
    private EntityManager em;

    private String searchString;
    private int pageSize = 10;
    private int page;

    @DataModel
    private List<Hotel> hotels;

    public void find() {
        page = 0;
        queryHotels();
    }

    ...

    private void queryHotels() {
        hotels = em.createQuery(
                "select h from Hotel h where lower(h.name) like #{pattern} " +
                "or lower(h.city) like #{pattern} or lower(h.zip) like #{pattern} " +
                "or lower(h.address) like #{pattern}")
            .setMaxResults(pageSize)
            .setFirstResult(page * pageSize)
            .getResultList();
    }

    ...

    public List<Hotel> getHotels() {
        return this.hotels;
    }

    public int getPageSize() {
        return pageSize;
    }

    public void setPageSize(int pageSize) {
        this.pageSize = pageSize;
    }

    @Factory(value = "pattern", scope = ScopeType.EVENT)
    public String getSearchPattern() {
        return searchString == null
            ? "%"
            : '%' + searchString.toLowerCase().replace('*', '%') + '%';
    }

    public String getSearchString() {
        return searchString;
    }

    public void setSearchString(String searchString) {
        this.searchString = searchString;
    }

    @Remove
    public void destroy() {}
}
```

MXML application:

```actionscript
[Bindable]
private var tideContext:Context = Seam.getInstance().getSeamContext();

// Component initialization in a static block
{
    Seam.getInstance().addComponents([HotelsCtl]);
}

[Bindable] [In]
public var hotels:ArrayCollection;

private function init():void {
    tideContext.mainAppUI = this;  // Registers the application with Tide
}

private function search(searchString:String):void {
    dispatchEvent(new TideUIEvent("searchForHotels", searchString));
}
```

Tide Flex component:

```actionscript
import mx.collections.ArrayCollection;

[Name("hotelsCtl")]
[Bindable]
public class HotelsCtl {

    [In]
    public var hotels:ArrayCollection;

    [In]
    public var hotelSearch:Object;

    [Observer("searchForHotels")]
    public function search(searchString:String):void {
        hotelSearch.searchString = searchString;
        hotelSearch.find();
    }
}
```

Of course, this is an overly simple example, but there is almost no unnecessary code, and there is a clean separation of concerns between the UI, the client component, and the remote service. Building Flex applications could hardly be easier.

There are a lot of choices out there today for creating rich Internet applications, each with its own set of advantages. When deciding which path to take, you want to get started easily without sacrificing the ability to create a robust, scalable, and maintainable application. GraniteDS maintains this balance.
Over the last few days, I had the chance to test the Datameer Analytics Solution (DAS). DAS is a platform for Hadoop that includes data source integration, an analytics engine, and visualization functionality. This promise of a fully integrated big data analysis process motivated me to test the product. It really does include all the required functionality for data management and ETL, it provides standard tools to analyze data, and there are nice ways to build visualization dashboards. For example, connectors for Twitter, IMAP, HDFS, and FTP are available. All menus and processes are self-explanatory, and the complete interface is strongly Excel- or spreadsheet-oriented. If you are familiar with Excel, you can analyze your big data out of the box.

For fast on-the-fly analysis, you work with only a subset of your data, and the analyses you store are automatically transformed into a kind of procedure. At the end, or according to a schedule you set, you "run" the analyses on your big data: DAS collects the latest data for you, creates MapReduce jobs in the background, and updates all your spreadsheets and visualizations. To close the analysis circle, you can use the connectors to write your results back to HDFS, to a database such as HBase, or to many other technologies.

Analytics the Way You Need Them

Datameer can prove useful in many ways; above all, it is a good way to present large amounts of data in a format that is digestible and useful. As individuals, we can only absorb so much data, but we can certainly use the information we do take in to make important decisions about our business, our products, and the future experiences of our customers. It is no surprise, then, that many people turn to Datameer to get this done.
If you have ever seen all of your data come together in one big spreadsheet and been able to visualize the result, then you know why products like Datameer help you get the results you need.

DAS is really designed for big data. If you test it with small data, you will be frustrated by the performance: the overhead of creating MapReduce jobs dominates in that situation. But as soon as you start with really big analyses, this overhead becomes negligible, and DAS takes over a lot of your programming work.

My Test Infrastructure

The following figure provides a nice overview of the Datameer infrastructure. DAS supports many data sources, it runs on all Hadoop distributions, it provides a REST API, and you can add plugins as connectors for other modeling languages such as R (#rstats). I tested DAS version 3.1.2 running on our MapR Hadoop cluster, version 3.0.2. After getting the latest package from Datameer support, the installation was straightforward and everything worked out of the box. Thanks to Datameer for providing a full test license.

There are several online tutorials and videos available, as well as some tutorial apps. Apps are another great feature of Datameer: you can download apps that include connectors, workbooks, and visualizations for different analysis examples, and you can create your own app from your analyses and share it with your colleagues or the community.

My Test Data and Analyses

I tested DAS with the famous "airline on-time performance" data set, consisting of flight arrival and departure details for all commercial flights within the USA from October 1987 to April 2008. I downloaded all the data (including supplements) to MapR FS, created connectors for the data, and imported the data into a workbook.
In the workbook I tested many classical statistical counting analyses:

- Grouping by airport and counting the number of flights
- Grouping by airline and calculating statistics such as the mean air time
- Using joins to add additional information, such as the airline name for each airline identifier
- Sorting to extract the most interesting airports by different measures

I am not an Excel expert, so it took me some time to get used to this low-level process of doing analysis on spreadsheets. But in the end, it is a very intuitive way of creating analyses. Every new analysis becomes available in a new tab in your workbook. There are several nice features to support your work; for example, a "sheet dependencies" overview shows the dependencies between sheets.

Apart from the classical analyses, DAS provides some data mining functionality, called "smart analytics". So far, it covers k-means clustering, decision trees, column dependencies, and recommendations. It works out of the box but is not yet at a level that is satisfying for real analyses. For k-means clustering, for example, there is no support for choosing the right number of clusters (k), and you cannot switch between different distance functions (the default is the Euclidean distance).

Finally, I visualized all my results in a nice "infographic". Many different visualization tools and parameters are available; after playing around with the settings, you can create a nice dashboard and share it with your colleagues. Be aware that the complete data set is about 5 GB: importing it took about 30 minutes, and running the workbook took more than 3 hours in my case. In the end, I split my analyses across several workbooks to keep them manageable.

Summary

It was easy to get started with the Datameer Analytics Solution (DAS).
It is definitely a great tool for doing big data analyses without any detailed Hadoop or big data knowledge. It covers many use cases and provides all the required functionality for a daily analysis process. However, as soon as your analyses get more complex, the limitations of Datameer become apparent, and you will probably look for a more powerful toolset or start implementing your big data analyses directly on Hadoop. Datameer supports many steps of the big data analysis process, it works efficiently, and its usability is straightforward. But big data is more than ETL, data analysis, and visualizing the results. Never forget to think about your use case and the business value you want to extract from your data; in the end, that is what should guide your choice of tools and implementations.
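As a closing technical aside on the smart-analytics limitation noted above (no control over k or the distance function): the following framework-independent sketch in plain Java (not Datameer code; all names are hypothetical) shows one k-means assignment-and-update iteration, making visible exactly the two knobs DAS does not expose: the number of clusters k and the distance metric (here, the squared Euclidean distance):

```java
// Hypothetical sketch of one k-means iteration (not Datameer's API).
public class KMeansSketch {

    // The default metric mentioned in the article; a full toolkit would
    // let you swap this for another distance function.
    static double squaredEuclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Assignment step: label each point with its nearest centroid.
    static int[] assign(double[][] points, double[][] centroids) {
        int[] labels = new int[points.length];
        for (int p = 0; p < points.length; p++) {
            double best = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double d = squaredEuclidean(points[p], centroids[c]);
                if (d < best) {
                    best = d;
                    labels[p] = c;
                }
            }
        }
        return labels;
    }

    // Update step: recompute each centroid as the mean of its points.
    static double[][] update(double[][] points, int[] labels, int k) {
        int dim = points[0].length;
        double[][] centroids = new double[k][dim];
        int[] counts = new int[k];
        for (int p = 0; p < points.length; p++) {
            counts[labels[p]]++;
            for (int i = 0; i < dim; i++)
                centroids[labels[p]][i] += points[p][i];
        }
        for (int c = 0; c < k; c++)
            for (int i = 0; i < dim; i++)
                if (counts[c] > 0) centroids[c][i] /= counts[c];
        return centroids;
    }
}
```

Choosing a good k (for example, by comparing within-cluster distances across several values of k) and swapping out the metric are precisely the options a more complete clustering toolkit exposes.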
We take a comparative look at two of the most popular databases on the market today, CouchDB and MariaDB, and what each brings to the table for your team.
Uber uses AI and ML for fraud detection, risk assessment, safety processes, marketing, matching drivers and riders, and just about everywhere else they can be applied.
Web analytics differ from API analytics. The right platform will aid growth with meaningful metrics. Learn which solution is best for your API product.