Friday 29 May 2020

Development Tools for 2015. Looking for the Best!


In the digital pages of InfoWorld.com we always like to read about the top development tools. InfoWorld likes to give awards to them, and below we have listed just a few of the top tools from its slide show, InfoWorld’s 2015 Technology of the Year Award winners. As a custom software development company, Telliant Systems is happy to report that we use many, if not all, of these top tools. Please check out this list; it will give you a better understanding of why these tools are rated so highly in the dev community. Along with InfoWorld’s list, Software Development Times (SD Times) also publishes its own list of the best ALM and software development tools companies: Best in Show ALM-Development Tools.

If you are a company looking for the right partner to create your next big product, or if you need a partner to enhance features, improve performance, or maintain your enterprise applications, look for one who uses the latest technologies to make your applications better!

HTML5

Despite all the logistical, technical, and philosophical struggles that have dogged HTML5 and its implementations over the course of its seven-year gestation period, the Web standard is at last an actual, ratified standard. It’s not buried in the pages of an obscure ISO spec document, but is in active use on millions of websites, in billions of browsers, and in countless desktop and mobile applications.

HTML5 owes a good deal of its success to the browser-makers — mainly the teams at Google and Mozilla whose perpetual update cycle allowed bleeding-edge HTML5 features to be used in the real world. Unlike the ill-fated XHTML, HTML5 became an easily adopted default for new websites and applications, and it provided a unified way to handle a plethora of tasks previously relegated to external plug-ins, such as pixel-accurate drawing, video and audio, external data storage, geolocation, and speech synthesis and recognition.
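A few of those once-plug-in-only capabilities can be sketched in plain HTML5 (the file names here are illustrative, and geolocation requires user permission):

```html
<!-- Pixel drawing, native video, client-side storage, and geolocation without plug-ins -->
<canvas id="chart" width="120" height="80"></canvas>
<video src="clip.mp4" controls></video>
<script>
  // Canvas 2D context: pixel-accurate drawing
  document.getElementById('chart').getContext('2d').fillRect(10, 10, 60, 40);
  // Web Storage: simple client-side persistence
  localStorage.setItem('lastVisit', Date.now());
  // Geolocation API
  navigator.geolocation.getCurrentPosition(function (pos) {
    console.log(pos.coords.latitude, pos.coords.longitude);
  });
</script>
```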

Not every feature in HTML5 has made people happy, though. The Encrypted Media Extensions standard, for instance, raised hackles in the free-software and open-Web communities (though Tim Berners-Lee himself gave it the thumbs-up). But HTML5 as a whole has brought a Web that is closer to being a platform, one that runs across a panoply of devices and with fewer external dependencies than ever.

— Serdar Yegulalp

 

Famo.us

Famo.us is the only JavaScript framework that includes an open source 3D layout engine fully integrated with a 3D physics animation engine that can render to DOM, Canvas, or WebGL. The tools you need to build Famo.us apps and sites will always be free and available to everyone. Integrations between Famo.us and Angular, Backbone, Cordova, and jQuery are under way. Famo.us uses Node.js and Grunt in its tooling. A well-written Famo.us iOS or Android app can perform as well as a native app, while being much easier and faster to develop.

In addition to the framework and tools, Famo.us provides free training through interactive online lessons, branded as Famo.us University and currently offering four online courses: Famo.us 101 (basics); Famo.us 102 (layouts, transitionables, and animations); Layouts (header-footer, grid, flexible, and sequential); and Famo.us/Angular (Angular bindings with Famo.us layouts and transitionables). I expected a course in using the Famo.us physics engine by now, but no such luck.

— Martin Heller

AngularJS

AngularJS is a lightweight, open source JavaScript framework for building Web applications with HTML, JavaScript, and CSS, maintained by Google and the community. It offers powerful data binding, dependency injection, guidelines for structuring your app, and other useful features to make your Web app testable and maintainable. Its most notable feature, two-way data binding, reduces the amount of code written by relieving the server back end from templating responsibilities. Instead, templates are rendered in plain HTML according to data contained in a scope defined in the model.

You can bind an Angular module to a given section of an HTML document using the ng-app tag, and a controller using the ng-controller tag; the actual controller code lives in JavaScript code that is typically maintained in a separate file and included using a script src tag. Data binding locations in HTML markup are signified by mustache markup — for example, <span>{{remaining()}}…</span>, where the contents of the mustaches will be updated whenever the value of the code inside them changes.

The ng-submit tag can redirect a form submit action to an Angular method. AngularJS provides built-in services on top of XMLHttpRequest as well as various other back ends using third-party libraries. For instance, the AngularFire library makes it easy to connect an Angular app to a Firebase back end.

— Martin Heller

Node.js

Built on Chrome’s V8 JavaScript runtime, Node.js is a platform that allows developers to easily construct fast, scalable network applications. Node.js uses an event-driven, nonblocking I/O model that renders it lightweight and efficient, compared to, say, JavaServer Pages or ASP.NET. Node.js also lets developers make code asynchronous without the mess of threads and synchronization. Node.js is well suited for data-intensive, real-time applications that run across distributed devices.

All is not entirely sweetness and light in the Node community, however. As reported elsewhere on InfoWorld, “Node.js devotees who are dissatisfied with Joyent’s control over the project are now backing their own fork of the server-side JavaScript variant, called io.js or iojs.” According to Mikeal Rogers of Digital Ocean, the idea of the fork is “to get the community organized around solving problems and putting out releases.”

— Martin Heller

Go

The Go programming language is an open source programming language from Google that makes it easy to build simple, reliable, and efficient software. It’s part of the programming language lineage that started with Tony Hoare’s Communicating Sequential Processes, and it includes Occam, Erlang, Newsqueak, and Limbo. The top differentiating feature of the language is its extremely lightweight concurrency, expressed with goroutines. The project currently has more than 500 contributors, led by Rob Pike, a Distinguished Engineer at Google, who worked at Bell Labs as a member of the Unix Team and co-created Plan 9 and Inferno.

Go’s concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code, yet has the convenience of garbage collection and the power of runtime reflection. It’s a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.

Goroutines, channels, and select statements form the core of Go’s highly scalable concurrency, one of the strongest selling points of the language. The language also has conventional synchronization objects, but they are rarely needed.

Goroutines are, to a rough approximation, extremely lightweight threads. Channels in Go provide a mechanism for concurrently executing functions to communicate by sending and receiving values of a specified element type. A select statement chooses which of a set of possible send or receive operations will proceed. It looks similar to a switch statement but with all the cases referring to communication operations.

— Martin Heller

Docker

All great innovations revolve around a simple idea. Docker, the open source application containerization system that started on Linux (and will head to Windows eventually) is based on an idea that’s as surpassingly simple and transformative as they come. Take an application, wrap it in a container that allows it to be easily deployed on any target system, and deploy it anywhere the Docker host is running. Apps run with the isolation of VMs, but they can be instantiated far more quickly, and they consume far less overhead.

Docker has done more than demonstrate a nifty way to package and isolate apps. It has sparked new approaches to devops, to turning applications into microservices and deploying them at scale, and even to designing the underlying operating system, whereby Linux itself is rebuilt around containerized applications as a basic unit of construction (see: CoreOS, Red Hat Atomic).

That said, Docker is young and protean. Questions linger regarding the project’s treatment of issues like networking and security in the long run. But a wide range of third-party contributors stand behind both the core technology and the needed auxiliary functionality like orchestration; most every cloud vendor is on board with Docker as a key technology; and the speed at which Docker is evolving makes a bold statement about its future.

— Serdar Yegulalp

GitHub

GitHub is one of about 18 public Git hosting sites, and it supports both public, open source projects and private, proprietary code. Public repositories are free; private repositories cost money to host, but only about $1 per repo per month. Each repo can be up to about 1GB; that isn’t a terrible limit if you restrict yourself to storing source code and a reasonable number of small images, but you can run out of space quickly if you try to store binary builds, media, external dependencies, backups, or database dumps. GitHub will warn you if you push files past 50MB and will reject files exceeding 100MB.

Where GitHub differentiates itself is in the social aspects of coding and in its client software. While you can’t really use Git effectively unless you can drop down to the command line at need, the GitHub client does a good job of implementing the Git features you need on a daily basis, and it automatically updates itself. In addition, the client integrates with GitHub’s very nice, free Atom programming editor, which in turn integrates well with GitHub repositories.

The social aspects of GitHub – following people, forking and watching projects, making pull requests, reporting issues, and sharing Gists – are important enough that I tell developers they should be on GitHub no matter where else they keep or use code repositories. Plus, some of the most important and popular open source projects are on GitHub, including Bootstrap, Node.js, Angular, jQuery, D3, Ruby on Rails, and the Go language.

— Martin Heller

JetBrains WebStorm

JetBrains’ WebStorm is a modestly priced IDE for HTML, CSS, JavaScript, and XML, with support for projects and version control systems including GitHub. WebStorm is more than an editor, though it’s a very good editor. It can check your code and give you an object-oriented view of your project.

Code inspections built into WebStorm cover many common JavaScript issues as well as issues in Dart, EJS, HTML, Internationalization, LESS, SASS, XML, XPath, and XSLT. WebStorm supports code checkers JSHint, JSLint, ESLint, and JSCS.

In addition to debugging Node.js applications as well as tracing and profiling with Spy-js, WebStorm can debug JavaScript code running in Mozilla Firefox or Google Chrome. It gives you breakpoints in HTML and JavaScript files, and it lets you customize breakpoint properties.

When debugging, a feature called LiveEdit allows you to change your code and have the changes immediately propagate into the browser where you are running your debug session. This saves time and helps avoid the common problem of trying to figure out why your change didn’t do anything, only to discover that you forgot to refresh your browser.

For unit testing, WebStorm bundles the JsTestDriver plug-in. This was originally a Google project, but JetBrains is now contributing to it. In addition, WebStorm can integrate with the Karma test runner. For either testing method, WebStorm tracks code coverage.

— Martin Heller

JetBrains IntelliJ IDEA

JetBrains’ IntelliJ IDEA is a Java IDE available both in an open source Community edition and in a paid-for Ultimate edition. What makes IntelliJ IDEA so compelling is its many innovative development accelerators that speed the process of getting code out of your head and into the computer. For example, its multicursor capability relieves you from having to repeatedly enter the same text at multiple locations: Set a cursor at each spot the text must be added, and type the text once — it appears simultaneously everywhere you specified. Other productivity enhancements include the find action — type open, and IntelliJ will find all operations in the IDE that pertain to the action of opening something (it’s even clever enough to recognize that you might have meant “importing”).

Granted, we wish the Community edition were equipped with the sorts of J2EE development tools found only in the Ultimate edition: database tools, support for frameworks such as JPA and Hibernate, deployment tools for application servers like JBoss AS, WildFly, and Tomcat. Nevertheless, the Community edition makes a fine Java application development platform that also gives you Android tools, as well as support for other JVM languages like Groovy, Clojure, and Scala (the last two via free plug-ins). Whichever version of IntelliJ IDEA you use, you’ll find a rich array of tools designed to simplify otherwise tedious development chores.

— Rick Grehan

Microsoft SQL Server 2014

Microsoft SQL Server 2014 was the most significant SQL Server release since 2008, and it carried two main themes: cloud and speed. For me, the high note is definitely speed, specifically OLTP performance, which Microsoft has addressed with a number of new features. In-memory tables, delayed durability, buffer pool extension, and updateable columnstore indexes are at the top of this list.

In-memory tables offer a turbo boost through a combination of optimized algorithms, optimistic concurrency, eliminating physical locks and latches, and of course storing the table in memory. This feature is brand new and still quite limited (do your homework first), but as the limitations are removed it will start reaching a much wider audience.

Columnstore indexes have matured since they were added in 2012, and now that they’re updateable (i.e., you no longer have to drop and recreate them), they will be much more usable to the general public. Resource Governor finally gets physical I/O control, where you can limit the amount of I/O per volume for a process. This will keep those I/O hogs from taking over your system.

Last and definitely least in my book are the Azure enhancements. You can now back up your database to Azure blob storage. You can even use Azure blob storage to store the data and log files of an on-premises database. While each feature may have its place, I think they come with more caveats than benefits. I don’t believe that housing your database files across the Internet will be an advantage to many shops.

— Sean McCown

 

Tableau

Tableau is on the cutting edge of the latest generation of business intelligence and reporting tools. The product differentiates itself from its antecedents with a strong focus on easy-to-use visual analytics and powerful features for exploratory data analysis. Tableau is available as a traditional desktop application, a Web-based tool, and a cloud offering. Users of the desktop app can create workbooks of charts and analyses that can be published to Tableau Server for browser-based access by any authorized user.

Tableau Software was founded in 2003, as a spin-off from Stanford University, where researchers had been working on new techniques for visualizing and exploring data. Dating back to the Stanford days, the goal behind Tableau was to combine structured queries and graphical visualization into an easy-to-use combination, allowing nonprogrammers to interactively explore data contained in relational databases, data warehouse cubes, spreadsheets, and other data stores.

Today, Tableau has joined the big data world by integrating data stored in Hadoop and other NoSQL stores. Recent Tableau releases include Mac support and features such as “story points” for linking data into a narrative structure, updated mapping tools, and a “visual data window” for visually defining joins between multiple tables.

— Phil Rhodes

 

Neo4j

A graph database is an excellent place to stash data that involves relationships among its elements: Social networks, shipping routes, and family trees are a few examples that spring to mind. Neo Technology’s Neo4j is a graph database that’s simultaneously short on learning curve and long on scalability. Neo4j can be run either as an embedded Java library or in client/server fashion, which means it can tackle tasks small to large (a Neo4j database can house up to 34 billion nodes and an equal number of relationships).

Even better, Neo4j is ACID-compliant and provides two-phase commit transaction support. Its API, which is accessible from a variety of popular languages, includes a number of important “shortest path” algorithms, which you can modify with cost-evaluation functions to create “cheapest path” equivalents. If you’d rather not control your Neo4j database from a language API, you can always turn to its declarative graph query language, Cypher, the Neo4j equivalent to SQL.

Neo4j is available in a free open source edition, as well as an enterprise edition that adds clustering, caching, backups, and monitoring capabilities. Want to know more? Take a few minutes to explore Neo Technology’s excellent, interactive online documentation, which guides you through graph database concepts, lets you execute Cypher code against a temporary database, and allows you to view the results graphically in real time.

— Rick Grehan

Apache Spark

The current darling of the Hadoop world, Apache Spark provides a suite of applications bound together by a common data structure. Designed by the AMPLab at UC Berkeley to solve machine learning problems on a Hadoop cluster, Spark takes a strictly functional approach to programming. Spark’s design allows you to easily share code and predictive models among the stack components by passing the underlying data structure, the RDD (resilient distributed dataset). Written in Scala, Spark also offers language bindings for Python and Java, and third-party DSLs are available for Clojure and Groovy.

Spark has four components in its ecosystem: Spark SQL, MLlib, Spark Streaming, and GraphX. MLlib is Spark’s sweet spot, greatly increasing the speed with which machine learning models can be built. Compared to some of the Hadoop machine learning veterans like Mahout, Spark is hands-down the winner in almost every respect. With a large, vibrant community actively adding new algorithms on a three-month release cycle, it won’t be long before Spark catches up to Python and R in machine learning functionality, while doing it at scale.

— Steven Nunez

Apache Storm

Storm is one of a new breed of real-time stream computing platforms that have emerged over the past year or two. The project began life with a company called BackType, which was eventually acquired by Twitter. After the acquisition, Twitter open-sourced the project, and it had an immediate impact on the big data landscape. Developers looking to perform real-time, incremental computations over streams of data immediately jumped on the Storm bandwagon, and the project has done nothing but pick up steam since.

After being hosted on GitHub and maintained largely by Twitter, Storm was eventually transitioned to the Apache Software Foundation incubator. Storm graduated the incubator and became an Apache Top Level Project (TLP) in September 2014. The latest release, 0.9.3, was delivered in November.

Storm provides scalability for performing distributed computation on streaming data. Components (spouts and bolts) are assembled into “topologies” that run forever by default, with the individual component instances running on multiple nodes within the cluster. The sequencing of computational steps is defined as a Directed Acyclic Graph where messages (tuples of fields) are passed from component to component according to the execution graph. This architecture facilitates easy recovery from failure and permits Storm clusters to scale to support massive streams of data.

— Phil Rhodes

Apache Hive

Traditional data warehouses are under pressure. Companies now want to analyze unprecedented amounts of data, and they aren’t willing to wait six months for the answer. Apache Hive is a good place to store EDW (enterprise data warehouse) data that isn’t frequently used, relieving some of the stress on the data warehouse. It is also a great ETL (extract, transform, load) tool: by landing data in Hadoop and transforming it in Hive before loading it into the EDW, you reduce the processing load on the warehouse as well. Data kept in a “warm” location like Hive remains accessible to auditors, regulators, and data scientists, continuing to add value to the enterprise, but at lower storage and processing costs.

Though typically deployed to augment the data warehouse, the newest Hive release, 0.14, boasts impressive capabilities of its own. Hive was traditionally a write-once, read-often system, so updates and inserts were always problematic. With version 0.14, Hive added SQL transactions for insert, update, and delete, with ACID semantics, bringing Hive much closer in capability to traditional EDWs. The Apache community is rapidly closing the gap, too: full SQL 2011 semantics and subsecond queries are coming up next. The day may not be too far away when Hive is your sole EDW solution.

— Steven Nunez

 

Apache Hadoop

Hadoop, the primogenitor of big data tools, has a plethora of newer, younger competitors nipping at its heels these days. But the Hadoop developers are not resting on their laurels, and the technology continues to reinvent itself to remain at the forefront of innovation in the data management and analytics space.

The biggest development was the modularization effort that decoupled the MapReduce API from the underlying scheduling and resource management facilities, giving the world YARN. Using YARN, a Hadoop cluster can serve as a general-purpose compute cluster, capable of hosting jobs using a wide range of compute models. YARN can host graph processing jobs using Apache Giraph, Bulk Synchronous Parallel computations using Apache Hama, and Apache Spark jobs. There is even an implementation of MPI for YARN.

But YARN is not the only news from the Hadoop camp. Over the past few releases, Hadoop has added improvements and new features related to security, encryption, rolling upgrades, Docker support, tiered storage in HDFS, improved REST APIs, POSIX extended file system attributes in HDFS, improved Kerberos integration, scheduler improvements, and much more. While Hadoop may be thought of as the graybeard of big data platforms, it remains a spry, nimble, fast-moving project with plenty of youthful vigor and energy. Hadoop competitors are finding that catching up to — or passing — Hadoop is no easy challenge.

— Phil Rhodes

 

Apache Cassandra

Cassandra is a distributed database that began life as sort of a combination of Google’s Bigtable and Amazon’s Dynamo, but has since evolved to incorporate architectural elements from both key-value and column-oriented data stores. Cassandra clusters can grow to more than 1,000 nodes, simultaneously providing high throughput and significant data protection.

Cassandra claims near-linear scaling of read and write operations as nodes are added to a cluster. Work and responsibility are equally distributed among the nodes: an I/O request can be sent to any cluster member, and there is no single point of failure. While Cassandra supports eventual consistency, consistency of both read and write operations can be tuned. Recent releases of Cassandra have added row-level isolation on write operations, thus providing full write consistency on a per-row basis.

Best of all, CQL — the Cassandra Query Language — is a route for developers familiar with relational database systems to begin work with Cassandra. CQL was deliberately fashioned after SQL, and CQL’s designers have done a fine job of incorporating Cassandra’s unique capabilities into the language (such as expressing Cassandra’s “lightweight transaction” mechanism as a CQL element) without making CQL too obscure for new developers. In fact, CQL is becoming the primary programming interface for Cassandra and even supports prepared statements — and their accompanying performance benefits.

— Rick Grehan
