This blog was originally published on Gummicube’s website.
Programming languages, frameworks, methodologies, libraries, APIs, user experience design tools, development and App Store Optimization strategies, and consumer preferences are constantly evolving. This is largely due to the pressure to innovate and keep pace with ever-changing software technologies. Companies are always looking for a better way to achieve their goals while delivering greater customer experiences.
In 2018, Artificial Intelligence (AI), Machine Learning, and Blockchain hype reached an all-time high. They received endless amounts of praise, and even the Super Bowl was swarmed with AI- and robot-themed commercials. The idea of using blockchain in software development continues to be fleshed out, and we’re beginning to figure out the types of problems AI is best suited to solve.
Every year, new software development trends come and go. As new products go to market and old ones explore new avenues, developers discover new ways to make a difference in the software development industry.
The top software development trends of 2019 have kicked into full gear and these are the ones we think you should keep an eye on.
In the mobile sphere, there are four dominant programming languages: Objective-C or Swift for iOS, and Java or Kotlin for Android. While Swift has completely taken over new iOS development, a similar trend is beginning to play out for Android. Google recently announced that it recommends Kotlin for all future apps, so developers are quickly making the switch. Kotlin’s popularity in some regions has increased more than sixfold.
Companies like Uber, Square, Pinterest, Amazon, and Netflix have already announced that they’ve moved to Kotlin.
Kotlin was designed with a strong focus on interoperability with its elder sibling, Java: Kotlin and Java code can call each other seamlessly and coexist in the same project without any loss of functionality. One of Java’s major shortcomings is its excessive verbosity; Kotlin reduces the amount of boilerplate code that developers need to write.
“Kotlin’s conciseness helps cut the lines of code by approximately 40%,” according to JetBrains.
This significant reduction in the lines of code translates to a decreased number of bugs.
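To see where that reduction comes from, compare a typical Java value class with its Kotlin equivalent (class and field names here are illustrative):

```java
// A typical Java value class: fields, constructor, getters, equals,
// hashCode, and toString - all written by hand.
class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() { return 31 * x + y; }

    @Override
    public String toString() { return "Point(x=" + x + ", y=" + y + ")"; }
}

// The equivalent Kotlin is a single line, with equals, hashCode,
// toString, and copy all generated by the compiler:
//   data class Point(val x: Int, val y: Int)
```

Every hand-written method above is a line the compiler can't check for you; generating them is where Kotlin removes whole classes of copy-paste bugs.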
Since Kotlin is open source, there are no licensing costs to adopt it. You can begin incorporating it by running existing files through a Java-to-Kotlin converter. A more common, and less risky, approach is to write all new code in Kotlin while leaving your old Java code as is.
Kotlin is compatible with all Java libraries and frameworks; however, they are still two different languages, and as a young language, Kotlin’s programming best practices are still being defined.
“Kotlin is easy to get started with and can be gradually introduced into existing projects, which means that your existing skills and technology investments are preserved,” JetBrains’ CEO Maxim Shafirov wrote.
Although Kotlin is not revolutionary, it’s a noteworthy enhancement of the Java ecosystem that enables additional functionality. The ability of the two languages to coexist within a project opens a door of opportunity for developers, while providing peace of mind that their Java code doesn’t need to be rewritten.
JavaScript is the most popular language among web developers, but we’re experiencing a shift to TypeScript.
JavaScript is loosely typed. You don’t have to declare that a string is a string, nor do you have to require a function to accept an integer as its parameter. This gives JavaScript a lot of flexibility, which allows you to move faster and iterate quickly.
A strong type system, by contrast, gives much more structure to a program, and it’s a great aid when working in teams. A single programmer can’t hold the entire codebase in mind while working on it, so having types helps keep the code manageable.
TypeScript is bringing the world of strongly typed languages, long popular with server developers, to applications that run in the browser. Strong typing gives developers several advantages: they can see what kind of thing a variable contains, they can navigate their project more easily, and they get an extra check for correctness with tools that validate the correct types are being used. These features are especially desirable on larger projects, particularly those with large teams working together. Also, since TypeScript is a superset of JavaScript, and thus shares the same syntax, it’s quite easy for JavaScript programmers to learn.
Being backed by a tech giant also boosts TypeScript’s steady rise. Microsoft isn’t the only company trying to tackle the issues TypeScript addresses though. Google released a programming language called Dart in 2011 that had similar goals. TypeScript has been more successful than Dart, in part because unlike Google’s language, which has its own syntax, TypeScript uses JavaScript’s existing syntax—making it easier for programmers who already know JavaScript to learn TypeScript.
Frameworks like React Native (RN) or Xamarin are cross-platform development tools. The two different mobile platforms, iOS and Android, each have their own sets of libraries and functions. Usually writing an app for both platforms means either knowing the inner workings of both, or having separate developers. These cross-platform frameworks allow a single codebase to work on both platforms. This code reuse can minimize the project’s duration, expense and time-to-market.
Out of the frameworks available, Xamarin and React Native (RN) are the most popular. Xamarin apps are written in C#, which about 37% of professional developers are proficient in, and React Native utilizes JavaScript, which about 67% of professional developers are proficient in, according to a 2017 Stack Overflow survey. Both are open source frameworks but the development environment of each will vary depending on your platform.
Xamarin is the more mature platform, with the first version having been released in 2011. Microsoft acquired the company in 2016, so continued support and development seems assured. Prior to 2014, Xamarin apps required separate UI code for the different platforms; then Xamarin.Forms was released, allowing developers to have a single codebase for all elements of an app on all platforms.
React Native has been picked up by many large and significant tech players recently, such as Walmart and Tesla, which has led React Native’s developer community to grow much faster than Facebook had expected. To date, over 1.6K contributors have committed code to the framework’s codebase, and this cycle has only fueled RN’s emerging popularity.
This emerging popularity does come with a caveat, however: due to the newness of React Native, framework updates have frequently broken existing apps and led to many unplanned hours of development time.
React Native and Xamarin apps are developed to be compatible with whichever mobile platform you target. The native components built into the frameworks make the apps feel essentially native, so everything a user sees in a React Native or Xamarin app looks as close as possible to a fully native one.
With computers as fast as they are, compile time is almost never an issue. However, the tooling available for Xamarin is more polished, as the development environment is Microsoft Visual Studio. React Native’s tooling is not as sophisticated, but it benefits from JavaScript’s larger support community on the web.
When it comes to debugging, Xamarin provides the tools within the Visual Studio system, whereas in the case of React Native, a third-party tool such as Reactotron is recommended. Thus, depending on your skills, abilities, and preferences when developing, either Xamarin or React Native could be the right choice.
At the end of the day, however, even if an app is developed with the latest framework, it’s important that it implement App Store Optimization best practices to ensure the app can be discovered on whichever platform it goes live on.
Microservices are essentially an architectural style in which software applications are designed as suites of independently deployable services. This enables two key things: the continuous delivery of large, complex applications, and for an organization to evolve its technology stack. According to IDC, “by 2022, 90% of all apps will feature microservices architectures that improve the ability to design, debug, update, and leverage third-party code”.
Microservice architecture works so well due to developer independence, fault isolation and resilience, and scalability. Small teams work in parallel and can iterate faster than large teams. If a component dies, you troubleshoot or create another while the rest of the application continues to function. Smaller components take up fewer resources and can be scaled to meet increasing demand for that component only. Individual components are easier to fit into continuous delivery pipelines and enable complex deployment scenarios not possible with monoliths.
This concept of separating applications into smaller parts is not a new one. Microservices depend not just on the technology being set up to support this concept, but on an organization having the business culture and know-how for development teams to be able to adopt this model. Microservices are a part of a larger shift in IT departments towards a DevOps culture, in which development and operations teams work closely together to support an application over its entire lifecycle, going through a rapid or even continuous release cycle rather than a more traditional long single-output cycle. Microservices have many benefits for Agile and DevOps teams – as Martin Fowler points out, Netflix, eBay, Amazon, Twitter, PayPal, and other tech stars have all evolved from monolithic to microservices architecture.
The ability to automatically deploy entire application environments is a key factor in reducing the time it takes to move features from idea to interactive product for your (paying) customers, and it underpins the concept of continuous delivery. The buzz right now is the “CI/CD pipeline,” which stands for continuous integration/continuous delivery. The idea is that as soon as code is written, it is automatically merged into the main codebase and deployed (assuming the tests pass), so you no longer have to wait months for the next scheduled deploy to ship your feature.
By Berin Catic
Manual deployments, frankly speaking, are slow, inefficient, and time- and labour-intensive. Most of all, though, they are not repeatable. A developer will build and deploy from their own machine, unaware of all the things they have installed that allow them to do so. When you try to use a different machine, nothing works. Automated deployment, on the other hand, is much more reliable: there is no ‘human error’, no ‘forgetting tasks’, and no inconsistency, as all tasks are perfectly replicable.
Anyone can deploy, in the sense that software release knowledge no longer has to be one person’s expertise; it is stored in the system. Here, code functions as infrastructure: you can define what hardware is needed in deployment scripts, so it is not just the software being deployed but also virtual machines, databases, and other SaaS solutions. This entire system, if successful, can be copied over and reused.
Validation of those deployments happens behind the scenes, and team members may only need to spend further time on a deployment if something has actually gone wrong. As a result, the development team can spend more time creating great software. All in all, automated deployment increases developer productivity and speeds up releases, which in turn leads to happy customers at the end of the day.
A great example of the CI/CD pipeline is the Google Chrome web browser.
For more information, here’s an article on the more technical aspects and factors of Automated Deployment.
Put simply, IoT is the addition of computing power and an internet connection to everyday physical devices such as your fridge, washer, dryer, sound system, toaster, doorbell, watch, lights, etc. IoT has both consumer and industrial applications (such as digital-twin solutions for complex machines, automotive, irrigation and agriculture, construction, etc.). Embedded with electronics, internet connectivity, and other hardware, these devices can communicate, interact, and respond to one another over the internet, and can be remotely monitored and controlled. Here are some examples of the vast range of uses and markets for IoT integrations.
“It’s about networks, it’s about devices, and it’s about data,” Caroline Gorski, the head of IoT at Digital Catapult explains.
The issue that has come up as IoT takes off, however, is the security of these networks, devices, and data. Adding computing power and connectivity to nearly everything makes these devices highly susceptible to hackers – everything that is connected to the internet can be hacked. Here’s a look into Cybersecurity in the Digital World and some notable incidents of data breaches. This issue persists because IoT is a trend that has been adopted so rapidly, often without consideration for safety and data security – moving forward, this is something the industry will definitely need to address.
There’s also the issue of surveillance. If every product becomes connected then there’s the potential for unchecked observation of users. While in the realm of data science this seems like a wonderful asset, the problem it carries is how this data is used, by whom, and who is targeted through results.
At the very heart of a successful IoT network implementation lie reliable standards. Due to the newness of the device-connectedness of the digital world, there are no security standards or frameworks set in place. Thus, developers will have to take measures to ensure that their users feel that their data is secure.
“IoT offers us [an] opportunity to be more efficient in how we do things, saving us time, money and often emissions in the process,” Matthew Evans, the IoT program head at techUK, says.
IoT is a huge tool not just for personal users – increasing universal design and making day-to-day processes more practical – but also for many industrial applications. The market for IoT is huge. For example, given the impending nature of climate change and the need for resource management, take standard irrigation practices: sensors can collect and communicate information about soil texture and sunlight exposure, regulate how much and when to water the crops, schedule watering times, and more. The entire process of farming crops can be automated and made more effective. While most IoT devices seem harmless or are created with the improvement of a process in mind, the fact remains that data holds power, and who has access to this data remains a concern. This matter has created an overall contest between security and efficiency that other software trends are seeing as well.
“Traditional business applications have always been very complicated and expensive. The amount and variety of hardware and software required to run them are daunting. You need a whole team of experts to install, configure, test, run, secure, and update them.” -Salesforce
Overall, the cloud holds three key advantages that allow for the potential of growth it offers: adaptability, scalability, and security.
First and foremost, cloud computing is an ever-adaptable tool. Because your program is now de-localized, it can be accessed and deployed remotely – increasing the efficiency and effectiveness of your business functions. This kind of agility allows businesses to run all kinds of apps in the cloud, like customer relationship management (CRM), HR, accounting, and much more. Not having moved onto the cloud puts your business at a clear disadvantage.
There are a number of different ways to leverage the cloud – from document storage and collaboration/communication platforms to data management and network platform services.
In terms of scalability, cloud computing serves as an easy way to avoid costly in-house IT maintenance and support, and users no longer have to worry about power or storage capacity for data or software applications. The shared infrastructure means it works like a utility: you only pay for what you need, upgrades are automatic, and scaling up or down is easy. Additionally, with the IoT trend ever-expanding, it is paramount that programs can integrate with a variety of devices and can be modified at a later date to accommodate new additions or changes.
Finally, moving away from local, singular devices raises security concerns; however, cloud computing platforms have accounted for this, and each offers some form of resources and guarantees for security and data protection. For further information, here is a quick guide on Which Cloud Platform is Right for You.
Allowing a whole host of companies to harness facial recognition for the first time would be a real game changer. Right now, as data is power, it would bring the technology to a whole range of new sectors. While it’s still early days and the technology is met with significant concern and disapproval, brands should start to reflect on its current use cases and consider how it could be used to augment and enhance the user experience of their existing apps. Apple is expected to bring facial recognition software to more mainstream popularity as the year progresses. Keeping a close eye on early applications will be crucial in helping brands spot new opportunities afforded by facial recognition technology.
Try this article to learn more about Facial Recognition Technology and its applications.
The blockchain is a simple yet ingenious way of passing information from point A to point B in a secure and fully automated way, and then keeping accounts of those transactions on a decentralized ledger. The key to the operation of a distributed ledger is ensuring the entire network collectively agrees with the contents of the ledger; this is the job of the consensus mechanism. Thus, the idea of an active and engaged community is central to blockchain and subsequently any type of cryptocurrency.
Blockchain is broken down into two basic verification systems: Proof of Work (PoW) and Proof of Stake (PoS).
Proof of Work is a highly intensive process in which, to add a block to the chain, a complex computational puzzle known as the “Proof of Work problem” must be solved. The process of solving this mathematical puzzle is termed mining, and the first miner across the network to find the right solution and validate the block is given a cryptocurrency reward (i.e., a processing fee). The probability of mining a block is determined by how much computational work is performed by that miner. The puzzles are asymmetric, meaning they are difficult for miners to solve, but the correct answer is easily verified by the network.
Once a block of transactions has been verified, it is added to the blockchain, a public, transparent ledger. This large-scale validation process means that no single individual can bring the network down, and it acts as a deterrent to distributed denial-of-service (DDoS) attacks: to tamper with that information, an attacker would have to control more computing power than 51% – the majority – of the network.
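The asymmetry described above – hard to solve, trivial to check – shows up even in a toy version of the puzzle: find a nonce whose SHA-256 hash of the block data starts with a given number of zeros. (The difficulty here is kept tiny so it runs instantly; real networks use vastly larger targets.)

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Toy proof-of-work: miners brute-force a nonce; anyone verifies in one hash.
class ToyPow {
    static String sha256Hex(String data) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(data.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    // "Mining": try nonces until the hash meets the difficulty target.
    static long mine(String blockData, int difficulty) {
        String target = "0".repeat(difficulty);
        for (long nonce = 0; ; nonce++) {
            if (sha256Hex(blockData + nonce).startsWith(target)) return nonce;
        }
    }

    // Verification is a single hash - the asymmetry that protects the network.
    static boolean verify(String blockData, long nonce, int difficulty) {
        return sha256Hex(blockData + nonce).startsWith("0".repeat(difficulty));
    }
}
```

Each extra leading zero roughly multiplies the expected mining work by 16, while verification stays one hash regardless of difficulty.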
Proof of Stake (PoS) is a response to the “bugs” in the PoW verification process: time and efficiency. Right now, PoW mining consumes immense amounts of time and computational power, to the point where the energy consumed and heat produced carry a real environmental cost. Instead, PoS offers a different validation structure. The process is still an algorithm, and the purpose is the same as proof of work, but with PoS there is no mathematical puzzle; the creator of a new block is chosen in a deterministic way. The probability of validating a new block is based on how much stake a person holds in that cryptocurrency.
A key component of the Proof of Stake system is higher energy efficiency. By cutting out the energy-intensive mining process, Proof of Stake systems may prove to be a much greener option compared to Proof of Work systems. Additionally, the economic incentives provided by Proof of Stake systems may do a better job of promoting network health. Under a Proof of Work system, a miner could potentially own zero of the coins they are mining, seeking only to maximize their own profits. In a Proof of Stake system, on the other hand, validators must own and support the currency they are verifying, which backs the essential nature of a community for the blockchain ledger to operate.
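A stake-weighted selection can be sketched in a few lines: given each validator’s stake, a seeded pseudo-random draw (so every node computes the same winner deterministically) lands on a validator with probability proportional to its stake. The names and the seeding scheme here are purely illustrative – real chains derive their randomness in far more elaborate ways.

```java
import java.util.Map;
import java.util.Random;

// Toy proof-of-stake selection: the chance of forging the next block
// is proportional to the validator's stake.
class ToyPos {
    // The seed would be derived from the previous block, so all nodes
    // agree on the same winner without any mining.
    static String pickValidator(Map<String, Long> stakes, long seed) {
        long totalStake = stakes.values().stream().mapToLong(Long::longValue).sum();
        // Draw a "ticket" in [0, totalStake) and walk the stake ranges.
        long ticket = Math.floorMod(new Random(seed).nextLong(), totalStake);
        for (Map.Entry<String, Long> e : stakes.entrySet()) {
            ticket -= e.getValue();
            if (ticket < 0) return e.getKey();
        }
        throw new IllegalStateException("unreachable for positive stakes");
    }
}
```

Because the draw is seeded, every honest node picks the same validator for a given block, which is what “chosen in a deterministic way” means in practice.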
“The practical consequence […is…] for the first time, a way for one Internet user to transfer a unique piece of digital property to another Internet user, such that the transfer is guaranteed to be safe and secure, everyone knows that the transfer has taken place, and nobody can challenge the legitimacy of the transfer. The consequences of this breakthrough are hard to overstate.” – Marc Andreessen
Beyond being virtually secure, blockchain allows for transfers with no costs or fees. Not only can the blockchain transfer and store money, but it can also replace any process or business model that relies on charging a small fee for a transaction between two parties. As an essentially non-hackable, immutable data structure with wide-ranging applications, blockchain looks to be a transaction and data ledger trend that will only continue to grow, lending legitimacy to any software program built on it.
As we approach the half-way point of 2019, implementing these trends is crucial in growing your business. If you’re looking for a software development partner, you should make sure they’re keeping up. Adapt to new trends and avoid falling behind. The biggest challenge is identifying which trends are worthwhile.
Although some of these trends have been around for a while, they’ll gain significant traction during the rest of 2019 and continue to exist well into the future. Until they’re replaced by another trend, that is!