10 Tips for Structuring a React Native Project

When starting a new project, there are plenty of choices to be made around code style, language, folder layout, and more. Consistency is key to creating clean, maintainable codebases, so once these decisions are made, you'll usually need to stick with them for a while.

Time and experience will teach you what works and what doesn’t. But what if you don’t have time? You can always use someone else’s experience.

Here are my top 10 tips for structuring a React Native project:

1. Use TypeScript

Yes, there is a bit of a learning curve if you’re used to plain JavaScript.

Yes, it’s worth it.

Typed JavaScript makes refactoring a whole lot easier, and when done right, gives you a lot more confidence in your code. Use the guide in the docs for setup instructions. Make sure to enable strict mode ("strict": true in the compilerOptions).

You can also add type checking in your CI with tsc --noEmit, so you can be confident in your types!
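One way to wire this in, for example, is a package.json script (the script name here is just an illustration) that CI can call:

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit"
  }
}
```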

2. Set up a module alias to /src

Set up a single module alias to /src (and a separate one for /assets if needed), so instead of:

import CustomButton from '../../../components/CustomButton';

you can do:

import CustomButton from '@src/components/CustomButton';

I always use a @ or a ~ in front of src to highlight it’s an alias.

I’ve seen implementations where folks set up multiple module aliases – one for @components, one for @screens, one for @util etc – but I’ve found a single top-level alias to be the clearest.

There’s a handy guide for setting this up with TypeScript in the React Native docs.
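As a sketch, the TypeScript side of the alias looks something like this (the native bundler needs a matching alias too, commonly via a plugin such as babel-plugin-module-resolver):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@src/*": ["src/*"]
    }
  }
}
```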

3. Use Inline Styles

You can choose between the built-in inline styles and Styled Components.

I started off with Styled Components, then switched to inline styles. There used to be a performance implication, but it’s negligible these days, so now it’s just a preference.

4. One Style File Per Component

Each component should have its own style file, named with a .styles.ts suffix.


Note that the .styles.ts in the filename is just a convention I use to indicate that the styles belong to the component; the TypeScript compiler treats these as regular .ts files.

Each style file exports a single style object for the component:

// FirstComponent.styles.ts

import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: {
    padding: 20,
  },
});

export default styles;

Each component imports only its own styles:

// FirstComponent.tsx

import styles from './FirstComponent.styles';


5. Use Global Styles

Create a globalStyles.ts file at the top level of the /src directory, and import it into the .styles.ts files as needed.

Always use constants for:

  • colours
  • fonts
  • font sizes
  • spacing

It may seem tedious at first, but it pays off in the long term. And if you find yourself creating a constant for every single spacing value, that’s worth gently raising with the design team, since a design guide generally shouldn’t need that many distinct values.

6. Flatten Style Constants

Instead of:

const globalStyles = {
  color: {
    blue: '#235789',
    red: '#C1292E',
    yellow: '#F1D302',
  },
};

Do this:

const globalStyles = {
  colorBlue: '#235789',
  colorRed: '#C1292E',
  colorYellow: '#F1D302',
};

It can be tempting to group these, but I’ve found that keeping them flat can be more handy, e.g. if you wanted to replace all instances of colorRed in your codebase, you could do a find and replace, whereas with colors.red it’d be harder, since the colour could have been destructured.
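A sketch of why the find-and-replace argument holds, using the values from above:

```javascript
// Flat constants: every usage site literally contains the name "colorRed",
// so a project-wide rename is a plain find-and-replace.
const globalStyles = {
  colorBlue: '#235789',
  colorRed: '#C1292E',
  colorYellow: '#F1D302',
};

// With nested constants, destructuring hides the full name at the usage site:
const nested = { colors: { red: '#C1292E' } };
const { red } = nested.colors; // searching for "colors.red" won't find this usage

console.log(globalStyles.colorRed === red); // true
```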

7. Use Numbers in Style Constants

Instead of:

const globalStyles = {
  fontSize: {
    extraSmall: 8,
    small: 12,
    medium: 16,
    large: 18,
    extraLarge: 24,
  },
};

Do this:

const globalStyles = {
  fontSize8: 8,
  fontSize12: 12,
  fontSize16: 16,
  fontSize18: 18,
  fontSize24: 24,
};

The first option may look nicer when writing it down, but during development, you don’t tend to care about “medium” and “large”, and just care about the number. And it will avoid the awkward naming when the designers inevitably add a font size 14 and you have to start calling your variables things like mediumSmall.

8. One Component Per File

Here’s the template for a new component:

import React from 'react';
import { View, Text } from 'react-native';
import styles from './App.styles';

const App = () => {
  return (
    <View style={styles.container}>
      <Text>Hello, world!</Text>
    </View>
  );
};

export default App;

Some things to note here:

  • function components over class components: I’d always use function components and manage any state and side-effects using hooks
  • I use arrow functions assigned to const, but const and function declarations are equally good here. In fact, function might be better in the long term
  • default export: I always use a default export, though there is an argument to be made that named exports are better since they’ll be clearer to refactor, and I agree – that might be the next step

9. Separate Components and Screens

Here’s a typical folder structure I end up with:

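A sketch of the layout this tends to produce (file names are illustrative, not prescriptive):

```text
src/
├── components/
│   ├── CustomButton.tsx
│   └── CustomButton.styles.ts
├── screens/
│   ├── HomeScreen.tsx
│   ├── HomeScreen.styles.ts
│   └── SettingsModal.tsx
└── globalStyles.ts
```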

I always keep components in the /components directory, and screens and modals in the /screens directory. When using react-navigation, there is no structural difference between screens and modals, but I prefer to differentiate the intent by naming the file SomethingModal.tsx.

Another thing to note is the file names – rather than creating a folder per component and naming each file index.tsx, the filename should reflect the component name. That is mostly for convenience – in most editors, it gets tedious to track down which file you’re editing when they’re all called index.tsx.

I’ve also seen implementations where all components are imported to a single index.ts file and exported from there. I personally am not a fan of that solution and see it as an unnecessary extra step.

10. Lint Your Code

It’s worth it. Trust me!

  1. Use eslint and prettier – they actually come pre-installed when you initialise a new project
  2. Set up a pre-commit hook – I usually set up a pre-commit hook for linting and pre-push hook for tests. There’s a great guide here.
  3. Check lint, test and TypeScript errors on CI! This is so important – the only way to ensure a consistent code style across the project lifecycle. Setting up CI is one of the first things I do when starting a new project.
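As a sketch, an older-style husky configuration in package.json might look like this (husky’s config format varies between versions, and the lint-staged glob is illustrative):

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged",
      "pre-push": "npm test"
    }
  },
  "lint-staged": {
    "*.{ts,tsx}": "eslint --max-warnings 0"
  }
}
```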

Hope this helps! Got any tips of your own that I didn’t list here? Let me know in the comments.

from Tumblr https://generouspiratequeen.tumblr.com/post/636230780074557442

How AR can change the future?

What is Augmented Reality (AR)?

AR is a sophisticated technology that enhances our experience of the world around us or in other words, it improves our visual perception of the environment. This technology superimposes digital information over any natural existing environment which means that we live in reality but augment it with additional information.

AR is not a replica of reality; rather, it integrates with and adds value to the user’s interaction with the real world. Earlier, this technology was used mainly in the entertainment industry, but recently it has also been widely adopted in the manufacturing and healthcare industries.

AR in our everyday life

Today, AR can be experienced on our own handheld devices, be it our smartphones or tablets.

I am sure you must have heard of the popular gaming app – Pokémon Go – which allowed users to catch Pokémon using their own smartphone cameras. All you had to do was download the app and search for Pokémon characters in your surroundings. Similarly, the video-see-through facial filters provided by Snapchat are also an example of AR. Snapchat allows users to project funny and sweet filters over their plain pictures.

These days AR also facilitates the home-buying process, where home-buyers can view a property from their own devices, using the ‘virtual-tour’ option before catching sight of the house in person. One can even use ‘furniture placement apps’ to see which furniture would look best in their house before actually getting the furniture home.

How many times have you bought clothing from an online store but had to return it because of the wrong size, or maybe because the style didn’t suit you? To cut down on clothing returns, Amazon introduced a ‘virtual changing room’ app which uses AR to scan your body measurements, gather more information about your preferences, and then recommend the best size and style for you. Isn’t that cool?

Location-based AR apps like Google Maps place digital directions on top of the real world. Google lens enhances the search experience where you can just open the app and aim it at the object you want to know about and it provides you with all the essential details associated with it.

How will Augmented Reality impact our future?

Augmented Reality has been surging in popularity over the past few years, and this revolution is not stopping anytime soon.
Undoubtedly, AR is going to change the future of education. It opens up a whole new dimension that allows us to experience in 3D what we would otherwise only see on the 2D pages of our books.
We will have a better gaming experience, easier online shopping and effortless home improvement. It is believed that the AR market will be worth between $70bn and $75bn by 2023. Studies also show that AR in the healthcare market will be worth $5.1bn by 2025.

I believe Augmented Reality is going to be many folds bigger than it is today. We will be able to use AR to help surgeons visualize what the body looks like. Product designers will be able to rapidly prototype new ideas and see those ideas come to life in the world around them, and engineers will be able to see the instructions overlaid onto the physical world.

If you are an artist, an architect, or a dress designer, AR is going to radically change not only the way you create content but the way you work, and this is a big opportunity for content creators. AR has indeed taken the world by storm.

Thank You!

from Tumblr https://generouspiratequeen.tumblr.com/post/636230778372636672

A guide to Geolocation API

Using the Geolocation API


The getCurrentPosition() method is used to get the user’s position. It takes two functions as parameters – the first is called with the user’s position, and the second handles errors in case the browser fails to get the user’s location.

navigator.geolocation.getCurrentPosition(showPosition, showErrors)

The second parameter is optional, though.
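Since getCurrentPosition() is callback-based, one common pattern is to wrap it in a Promise so it can be awaited – a sketch (the timeout option shown in the usage comment is one of the standard PositionOptions):

```javascript
// A sketch: Promise wrapper around the callback-based Geolocation API.
const getPosition = (options) =>
  new Promise((resolve, reject) => {
    navigator.geolocation.getCurrentPosition(resolve, reject, options);
  });

// Usage, inside an async function:
// const position = await getPosition({ timeout: 10000 });
// console.log(position.coords.latitude);
```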

Example 1

This is a very simple example of implementing the geolocation API without handling any errors. On success, the getCurrentPosition() method passes a Position object to its callback.

const getLocation = () => {
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(position => {
      console.log(`Longitude: ${position.coords.longitude}`);
      console.log(`Latitude: ${position.coords.latitude}`);
    });
  } else {
    console.log(`Geolocation is not supported by this browser.`);
  }
};

On line 2 we check whether geolocation is supported by the browser. If it is, we log the longitude and latitude using the coords.longitude and coords.latitude properties.

The example below illustrates how to handle errors:

const getLocation = () => {
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(showPosition, showError);
  } else {
    console.log("Geolocation is not supported by your browser");
  }
};

const showPosition = (position) => {
  console.log(`Latitude: ${position.coords.latitude}
Longitude: ${position.coords.longitude}`);
};

const showError = (error) => {
  switch (error.code) {
    case error.PERMISSION_DENIED:
      console.log("User denied the request for Geolocation.");
      break;
    case error.POSITION_UNAVAILABLE:
      console.log("Location information is unavailable.");
      break;
    case error.TIMEOUT:
      console.log("The request to get user location timed out.");
      break;
    case error.UNKNOWN_ERROR:
      console.log("An unknown error occurred.");
      break;
  }
};

getCurrentPosition() and other properties

We have already seen two properties of the Position object passed to the success callback: coords.latitude and coords.longitude. Its other properties are:

| Property | Returns |
| --- | --- |
| coords.latitude | The latitude as a decimal number |
| coords.longitude | The longitude as a decimal number |
| coords.accuracy | The accuracy of the position |
| coords.altitude | The altitude in metres above mean sea level |
| coords.altitudeAccuracy | The altitude accuracy of the position |
| coords.heading | The heading in degrees clockwise from north |
| coords.speed | The speed in metres per second |
| timestamp | The date/time of the response |

watchPosition() and clearWatch()

The Geolocation object also has two more interesting methods:

watchPosition(): Returns the current position of the user and continues to return updated positions as the user moves (like a GPS in a vehicle).

clearWatch(): Stops the watchPosition() method above.


The example below shows the watchPosition() method. You can test this on a GPS-enabled device like a smartphone.

const getLocation = () => {
  if (navigator.geolocation) {
    navigator.geolocation.watchPosition(position => {
      console.log(`Latitude: ${position.coords.latitude}
Longitude: ${position.coords.longitude}`);
    });
  } else {
    console.log("Geolocation is not supported by this browser.");
  }
};
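clearWatch() takes the id returned by watchPosition(); a sketch of pairing the two (function names are illustrative):

```javascript
// Sketch: keep the id returned by watchPosition so the watch can be stopped later.
let watchId = null;

const startWatching = () => {
  watchId = navigator.geolocation.watchPosition((position) => {
    console.log(`Latitude: ${position.coords.latitude}`);
  });
};

const stopWatching = () => {
  if (watchId !== null) {
    navigator.geolocation.clearWatch(watchId); // stop receiving updates
    watchId = null;
  }
};
```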

You can use a map API (like Google Maps) to present this information in real time on a map.

from Tumblr https://generouspiratequeen.tumblr.com/post/636094878548393985

AWS API Architecture

This diagram gives an outline of the architecture and the resources used.

Regions & Availability Zones

Regions are a grouping of AWS resources in a certain geographical location. Within each region are clusters of data centres called availability zones.

Each region contains multiple availability zones which are physically separate from one another to ensure they are isolated from failures in other zones. The zones are then connected through ultra-low-latency networks.

Any AWS resource you create must be placed inside a VPC subnet (we’ll cover this), which must be located within an availability zone. It’s often a good idea to launch resources in multiple availability zones to ensure maximum uptime.

Virtual Private Cloud (VPC)

A VPC is a private virtual network where you can provision AWS resources – in essence your own private area within AWS. You have complete control over this environment including selecting IP addresses, route tables and network gateways.

Working with subnets, we can set up private and public-facing environments and control who can access them, and how. A VPC spans all of the availability zones in a region.


Subnets

A subnet is a subsection of a network and can be either public or private. The key difference: public subnets have a route to the internet, whereas private ones do not and can only communicate with other subnets within the same VPC.

You can add one or more subnets to each availability zone, but each subnet must reside entirely within one zone and cannot span zones.

A quick analogy for everything covered so far

Imagine an office building as being a region – an outer layer that contains many things.

Each floor is an availability zone. A region can, and most likely will, have more than one zone, much like a building and floors!

Each department is a VPC, it can span across floors.

Finally a subnet is the office suite – it can only reside within a single floor.

Security Groups

Security groups act as a virtual firewall – they allow and deny traffic. They operate on an instance level rather than a subnet level, so you would apply a security group to each instance you launch.

You apply rules to each security group to allow traffic to and from its instances. These rules can be modified at any time and will instantly apply to all instances associated with that security group. Multiple security groups can be added to each instance.

By default, all inbound traffic is denied, and all outbound is allowed.
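As a sketch, a security group allowing only inbound HTTPS might look like this in CloudFormation (the resource names and CIDR here are illustrative):

```yaml
ApiSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow inbound HTTPS to the API instances
    VpcId: !Ref MyVpc            # illustrative reference to the VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0        # anyone, but over HTTPS only
    # Outbound traffic is allowed by default unless SecurityGroupEgress is set.
```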

To continue with our office building analogy, security groups would be the key cards to access different areas of the building.

Application Load Balancer

An application load balancer is essentially a server which fronts the application and forwards traffic to instances downstream – so in our case the Fargate instances. It’s used to spread the load across multiple instances whilst providing a single point of access.

It will also perform health checks on our instances and if one instance fails the load balancer will direct traffic to the remaining healthy ones. We provide a route – for example ’/health’, and if this returns a 200 it knows the instance is healthy.

Other features of the load balancer include:

  • Support for SSL/HTTPS.
  • Working across availability zones – if one zone goes down, the load balancer moves all traffic to the other zones.
  • Separating public from private traffic.

Route 53 & Internet Gateways

Route 53 is a managed DNS – a collection of rules and records mapping IP addresses to URLs. It can be used for both public domain names and private domains – which can only be resolved by instances within the VPC. Route 53 can also provide load balancing through DNS and limited health checks.

Internet gateways provide the VPC with a route to the internet. If you think of your home network as a subnet, your modem would be the internet gateway providing access to your ISP and the wider internet.

Only one internet gateway can be applied to any VPC, and a gateway cannot be detached from a VPC whilst there are any active instances still running on it.

AWS Fargate

The final piece of the puzzle is Fargate – “a serverless compute engine for containers that works with both Elastic Container Service (ECS) and Elastic Kubernetes Service”. This is where we provision our API containers.

Fargate is a kind of evolution of Elastic Container Service. It’s managed by AWS – removing the need to provision and manage servers, and it scales up and down seamlessly, meaning you only pay for what you use. You can think of it as containers on demand – where everything is managed at a container level.

The quote above sums up the reason to choose Fargate.


Hopefully this provides a high level understanding of some of the resources that go into setting up a containerized application on AWS. Of course, the best way to fully grasp these concepts is to dive in and get hands on.

from Tumblr https://generouspiratequeen.tumblr.com/post/636094878031511552

5 Powerful Instagram Apps For Business That Are Totally Free

Maximizing any marketing task’s potential comes down to having the right tools and Instagram apps for business that boost your posts’ quality and save time. From customizing your bio link, creative storytelling, scheduling of posts, content curation, insights, audience engagement, visual commerce, and data analytics, great apps can transform the way you market and manage businesses on Instagram. 

1. url.bio 

If you are looking for an app that offers multiple options to your followers whenever they click on your bio link, go for url.bio. This app lets you share all your important links and social media with just one URL, and the best part is, there is no limit to the number of links you’d like to add. As a social media marketing tool, url.bio gives you an advantage in affiliate marketing, creating business profiles, easy cross-promotion, and blogging promotion.

Url.bio is highly customizable, which is perfect for building your brand. You can customize the colors and the thumbnails of your links. The app also has a collection of themes, so you can choose a theme that suits your brand’s persona and the vibe you’re going for. 

Another great feature with url.bio is it allows you to track your analytics to see just how well your links are performing. Gaining valuable insights into your traffic helps know which content is performing best with your target audience. 

Compared to other Instagram apps for business, url.bio allows you to access analytics total views, analytics total clicks, custom themes, pre-designed themes, customer support, and unlimited links. However, some features that only url.bio offers include direct links, a link scheduler, priority links, link thumbnails, social media links, and analytics click-through rate. 

2. Over

Over is a popular graphic design Instagram app that helps Instagrammers, social media managers, and digital marketers create showstopping images that make audiences want to take that second look. Since the app is designed with Instagram stories in mind, Over is one of the most preferred apps for creating storytelling content, mostly because of its creative design suite. 

The app also recently released its branding toolkit called Over Pro, which comes with a 30-day free trial period. The toolkit has seven modules covering: branding and creating your business’s story, creating a logo for your brand, picking the right graphic style for your brand, crafting images that best suit your brand, and choosing the typeface, colors, and templates that shape your brand voice.

Source link: https://www.madewithover.com/create/brand-new-brand-toolkit 
Source link: https://www.madewithover.com/ 

What’s great about this app is the plethora of personalized touches available. There are hand-curated videos, font collections, and graphics you can choose from. With Over’s easy-to-use tools for blending, creating layers, and masking, you can easily professionalize your images in minutes. 

Source link: https://www.instagram.com/p/CGpnUYjgaKD/ 

A quick note, though, when using Over for business and commercial use, be sure to read the entire terms and conditions to get acquainted with the permissions and limitations in using the tool. You are in the clear as to using Over’s services for personal and commercial use “except when the Service Content is used to create end products for sale where the lifetime sales of the end product for sale exceed 400 units…” Read the full text here.  

3. Hootsuite 

As one of the top Instagram apps for business in social media management, Hootsuite is one of the most complete. You can bring all your social channels into one dashboard where you can do everything from writing new posts, reading and tracking content, viewing post stats, and scheduling content. The platform allows you to support Twitter, Facebook pages, LinkedIn pages, Instagram, WordPress blogs, and more. 

Source link: https://signupnow.hootsuite.com/newbranding-selfserve-apac-noncore-usd-branded-sem/

Hootsuite does the job of scheduling your social posts. You can keep your social presence active 24 hours a day since you can automatically schedule hundreds of posts across your social media accounts all at once. Content creation is also one of the tool’s best features. You can easily manage social content by staying on message with pre-approved content from your team posts stored in your cloud file service. You can tag, search, and check usage stats of your content in a breeze. This makes it easy to track and improve your social ROI. 

Source link: https://hootsuite.com/platform/analyze 

Hootsuite’s comprehensive reporting gives you a bird’s eye view of the impact of your social media campaigns. Conversions are measured by social channels and have separate ROI between paid and owned media. As a monitoring and community management tool, Hootsuite allows you to find and filter social conversations by keywords, hashtags, and locations to hear what people say about your industry, competitors, and, most importantly, your brand. 

Source link: (screenshot taken from featured video, “ Introduction to Hootsuite Analytics”) — https://blog.hootsuite.com/social-media-report-template-guide/ 

Hootsuite is built to answer your business needs while optimizing your social media strategies. Pros include an easy-to-use interface and dashboard that integrates a wide range of social channels. Its app directory provides access to more than 100 apps so you can monitor various channels. Since it is a web-based tool that is compatible with all browsers, no extra software is needed. Also, Hootsuite’s collaboration tools make it easy for you to organize and monitor tasks. Analytics reports are sent weekly by email.

However, since Hootsuite has many components, it takes a bit more time to learn and maximize its features. Adding more team members and signing up for analytics reports will entail higher costs. Also, it can be helpful to know that Facebook Analytics does not integrate with Hootsuite very well. 

4. Squarelovin  

If you are particular about getting in-depth data analytics and metrics on your Instagram posts, try Squarelovin. The app specializes in using authentic content from real users that provide social proof, inspiration, and trust on every channel. This is how Squarelovin naturally encourages engagement and ROI. 

Source: https://squarelovin.com/ 

How does it work? By capitalizing on Visual Commerce, Squarelovin collects and curates images shared by people worldwide (user-generated content) and allows you to choose which ones you like most, seamlessly allowing you to request rights, tag to products, and organize the best content you earned. You can make content from your consumers, approve or disapprove pictures, manage image rights, and curate content all in one dashboard. 

Source: https://squarelovin.com/ 

With one click, you will receive essential usage rights through a fully-automated process. This process lets you collect rights approved media at scale. You benefit from user-generated content (UGC) across all your relevant on-and offline marketing channels in the process. 

Source: https://squarelovin.com/visual-commerce/ 

With Squarelovin, you can link pictures and videos with the respective product data and make UGC shoppable without your buyers even noticing. These shoppable UGC posts get integrated into your homepage, landing pages, and blogs. This way, you showcase highly relevant content while upgrading the customer experience. In the end, you can analyze and determine the success of your content quickly through rich analytics that covers traffic, clicks, conversions, and revenue.

Source: https://squarelovin.com/visual-commerce/ 

Squarelovin is a free tool that guarantees visual content on all marketing channels, fast integration, unique layout, proper media usage rights, brand awareness, and advanced visual insights on content, products, and contributors. On the other side of that, note that the monthly analysis that Squarelovin offers doesn’t start right away. It will begin after the first month of usage.

5. Repost For Instagram 

Repost For Instagram is one of the most popular reposting apps. Sharing content to your feed becomes second nature because of how the app makes it so easy for you to share content. Aside from being well-designed and easy to use, the original Instagrammer is credited whenever you repost photos and videos on your Instagram feed. 

Source: https://play.google.com/store/apps/details?id=ventures.bench.repost&hl=en 

Repost for Instagram can repost media from private profiles and supports multiple media and IGTV posts. Once the app is downloaded and installed on your device, it is effortless to repost a video or photo you love. Simply copy the link to your clipboard from Instagram, and Repost for Instagram takes care of the rest. 

You can also choose whether to copy the caption to your clipboard, customize the colors, and modify the attribution marks’ positions. Once you click on the share icon, you can either post it as a standard post or share it with your story. The app has limited ads, so choosing what to repost and share is done at top speed. However, since the app has in-app purchases, some features may not be readily available. 

Free Instagram Apps For Business

Whatever you need to optimize your social media campaigns and marketing strategies, there are Instagram apps for business that you can choose from and combine to get the job done. These tools are what every savvy digital marketer needs to create compelling, engaging content and seamless, effective advertising on Instagram.

from Tumblr https://generouspiratequeen.tumblr.com/post/636004285280354304

The Top 5 Object Storage Tools for Developers

Choosing a storage solution is one of the most significant decisions a developer (or development team) needs to make when building a web or mobile application.

As you can imagine, there are many different storage options.

In this article, we’ll briefly discuss two of the most used cloud solutions: block storage (also known as SAN or storage area network) and object storage. After this, we will go through my top 5 suggested object storage solutions.

There is a third type of storage that’s commonly used: file system storage. However, this mechanism can also coexist with SANs and object storage, so we won’t go too deep into it.

What is Block Storage?

Block storage is a network of hard drives connected via a fiber-optic network. This gives it an edge over copper cables due to the increased speed.

The reason it’s called block storage is that each file in this system is divided into “blocks” of data stored in a disk. Sectors in the disk hold onto individual blocks of data, and these blocks, when combined, form the whole file.

So while there are advantages of using SAN, like high scalability, it is costly and can get incredibly complex as the network grows.

What is Object Storage?

The defining feature of object storage is that, instead of storing files as blocks, data is stored as objects.

Typically these objects will have more data attached to them than the blocks used for block storage. The objects often include:

  • A blob which contains all the payload (i.e., image, video, text content)
  • Metadata, which tells us more about the file (timestamps, permissions, author, revision, and so on)
  • A universally unique ID (UUID)

One major advantage of this type of storage is that objects are easily obtained and found because of their UUID. With block storage, there’s a specific hierarchy of files that a user goes through before getting the data they need, which can considerably slow down data retrieval.
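Conceptually, an object store behaves like a flat key-value lookup: given a UUID (or key), you get the blob and its metadata back directly, with no directory hierarchy to walk. A toy sketch in JavaScript (an in-memory illustration, not a real client library):

```javascript
// Toy in-memory "object store": flat namespace, direct lookup by id.
const store = new Map();

const putObject = (id, blob, metadata = {}) => {
  store.set(id, { blob, metadata: { ...metadata, timestamp: Date.now() } });
};

const getObject = (id) => store.get(id); // O(1) – no hierarchy to traverse

putObject("123e4567-e89b-12d3-a456-426614174000", "image bytes…", { author: "alice" });
const obj = getObject("123e4567-e89b-12d3-a456-426614174000");
console.log(obj.metadata.author); // alice
```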

Now that we have that out of the way, here’s a list of my top 5 object storage tools for developers:

Amazon AWS S3

S3 is one of the pioneers of object storage. It manages gigantic loads of data from all over the world across hundreds of industries.


  • High reliability and durability as it stores S3 objects in copies across multiple systems.
  • Allows you to manage costs through its S3 Storage Classes, which provides different rates depending on access patterns.
  • Provides the highest security and protection for your data.

Google Cloud Storage (GCS)

Google offers four different storage classes for businesses of all sizes. As data moves across those classes, GCS applies lifecycle management, letting you control how long data is stored before it is deleted.


  • You don’t have a minimum object size.
  • You have access to storage locations all around the world.
  • Very high durability and low latency.
  • Data has redundancy across several geographic locations.


LakeFS

LakeFS is an open-source tool that works with object storage data lakes. Data lakes usually store files or blobs in raw format centrally through a repository.

Data lakes, on their own, are limited by the lack of frequent communication between entities. LakeFS solves this by using data versioning.


  • Through S3 or GCS, it allows scaling up to Petabytes in size by using a system that mimics Git.
  • You can experiment as it provides you with a development environment with your data.
  • Since it uses a Git-like scheme, you can safely use new data in another branch without affecting the main branch. You can then, later on, merge it safely once each aspect of new data checks out (schema, etc.).


MinIO

MinIO is another open-source solution. It is compatible with the Amazon S3 API, which makes it a good fit for high-scale projects that require strict security.


  • It calls itself the world’s fastest object storage, with read/write speeds of up to 183 GB/s.
  • It applies web scaling principles – a cluster can join forces with other clusters until it forms multiple data centers.
  • It’s Kubernetes friendly.
  • Because it’s open source, users can improve and freely redistribute it.


StackPath

StackPath offers a content delivery network (CDN) service, edge computing, and S3-compatible object storage. It touts itself as a cheaper alternative to Amazon S3 and other cloud providers.


  • It claims to be six times faster than competing services, especially when combined with the CDN or the edge computing platform.
  • It is serverless, which means it needs no warmup.
  • It has 45 edge locations, which means your application is available worldwide with the same performance anywhere.

In Closing

There you have it – a short list of the top object storage tools that you can use for your next web or mobile project. Object storage has indeed proven a great way to store data when scalability is the most significant consideration.

from Tumblr https://generouspiratequeen.tumblr.com/post/636004284725641217

Development environment with Docker and Traefik

Development environment with Docker and Traefik:


The Problem

Local development with Docker is nothing new.
Run a local container, expose some ports, and off we go.
But working on multiple projects at the same time gave me the problem of conflicting ports on my Docker host, and it is hard to remember which project is running on which port. Local domains would be a nice solution.
I will walk you through my local development setup with Docker and Traefik to solve this problem.


We will install a reverse proxy on our local machine to give our projects their own domains. We will do this using Traefik (https://traefik.io/traefik).
Traefik calls itself a cloud native application proxy.
It is ideal for use in cloud contexts like Kubernetes or Docker. Traefik itself is also a simple Docker container, and it will be the only container that exposes a port to our Docker host. The containers of the different projects and the Traefik container will be in the same Docker network, and Traefik will forward requests from the client to the corresponding container.


Prerequisites

  • docker
  • docker-compose
  • your IDE of choice


Our first step will be to create a docker network.
We will call it “web”. We are creating this network so that different docker-compose stacks can connect to each other.

docker network create web

Now we start our Traefik container.
We could do this by running a simple docker command, but in this case we are using a small docker-compose file to configure our container.

version: '3'

networks:
  web:
    external: true

services:
  traefik:
    image: traefik:v2.3
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    restart: always
    ports:
      - "80:80"
      - "8080:8080" # The Web UI (enabled by --api)
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Save this file in a directory and start the container by typing

docker-compose up -d

After the container has started successfully, we can access the Traefik dashboard via http://localhost:8080.

Now we will start a small web project; as an example, this is just a small static website. I will only show the docker-compose.yml file in this post. You can find the complete folder structure and Traefik setup here: https://github.com/flemssound/local-dev-docker-traefik

version: '3.3'

networks:
  web:
    external: true

services:
  myproject:
    image: nginx
    networks:
      - web
    # Here we define our settings for how traefik should proxy our service.
    labels:
      # This enables traefik to proxy this service
      - "traefik.enable=true"
      # Here we have to define the URL
      - "traefik.http.routers.myproject.rule=Host(`myproject.localhost`)"
      # Here we define which entrypoint clients should use to access this service
      - "traefik.http.routers.myproject.entrypoints=web"
      # Here we define in which network traefik can find this service
      - "traefik.docker.network=web"
      # This is the port that traefik should proxy
      - "traefik.http.services.myproject.loadbalancer.server.port=80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: always

Now we can access our website via http://myproject.localhost.
Everything in the html folder is mounted into the nginx container’s public folder (/usr/share/nginx/html).
Instead of exposing the nginx port directly to our host we proxy it through traefik.
We can also see it in the traefik dashboard.

To create another project, copy the myproject folder, adjust the docker-compose.yml, and start it up. Now we have a second project running, for example under mysecondproject.localhost, also on port 80. We don’t have to worry about conflicting ports between our projects and can access each one by its name.
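For the copied project, the only parts of docker-compose.yml that need adjusting are the service name and the Traefik router/service labels; a sketch (the name mysecondproject is just an example):

```yaml
services:
  mysecondproject:
    image: nginx
    networks:
      - web
    labels:
      - "traefik.enable=true"
      # Only the router/service names and the host rule change per project
      - "traefik.http.routers.mysecondproject.rule=Host(`mysecondproject.localhost`)"
      - "traefik.http.routers.mysecondproject.entrypoints=web"
      - "traefik.docker.network=web"
      - "traefik.http.services.mysecondproject.loadbalancer.server.port=80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: always
```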

from Tumblr https://generouspiratequeen.tumblr.com/post/636004284074459136

How Webpack uses dependency graph to build modules

How Webpack uses dependency graph to build modules:


In the example above, the file bootstrap.main.ts is used as the entry point to build the dependency graph. The other files in the example are all required by the main file.

So let’s see how this dependency graph is resolved and traversed so that all the files are loaded in the correct order.

More about Dependency Graph

The graph we refer to here is a directed acyclic graph (DAG), in which every edge points in only one direction. Because of its directed, acyclic nature, it is generally not possible to traverse the entire graph starting from a single node.

But how is the dependency graph sorted?
Answer: topological sorting

So, your next question will be what is Topological Sorting 😅

What is topological sorting and how does it work?

Let us consider an example of directed acyclic graph to understand this algorithm.

In topological sorting we use two data structures, a set and a stack, to maintain the order and keep track of the vertices.

The set keeps track of all the visited vertices, while the stack ends up holding all the vertices in topologically sorted order.

I am going to refer to the graph above. Let’s start with node E. In the beginning our visited set is empty, so we put E directly into it. After E we explore E’s children, F and H. H is not in the visited set and has no children, which means it is fully explored, so we add H to the set and push it onto the stack.

Next we move to E’s other child, F, and check whether it is in the set. It is not, so we add it to the set and look at its child nodes. F has one child, G, which is not in the set either, so we add it as well. G does not have any child nodes, so we also push G onto the stack.

After pushing G onto the stack we move back to its parent, F. All of F’s children are explored, so we push F onto the stack and move to its parent, E. Since all of E’s children have already been moved to the stack, we push E onto the stack too.

Now we pick another unvisited node; let’s pick B, which has two children, C and D. We first check whether C is present in the set, and add it since it is not. After adding C we check C’s children: E is C’s only child, and since it is already present in the set, we push C onto the stack.

Next we move to B’s other child, D. It is not in the set, so we add it. D has one child, F, which is already present in the set, so we push D onto the stack.

With this, all of B’s children are fully explored, so we push B onto the stack.

After completing this cycle we move to the last unvisited node, A. Since A’s only child is already present in the set, we push A onto the stack. The final order of the set and the stack will look something like this.

The order in which the nodes will be rendered is A, B, D, C, E, F, G, H.

Note: there can be different valid orders for a topological sort; it depends on how you pick the unvisited nodes.

Consider all the nodes in the graph as modules that depend on one another. The directed edges represent the dependency relationships between the modules. Webpack uses topological sorting to resolve these relationships and loads the modules in the order produced by the algorithm.
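The algorithm described above can be sketched in a few lines of JavaScript. This is a generic DFS-based topological sort, not webpack’s actual implementation, and the module names in the example graph are made up:

```javascript
// DFS-based topological sort: a visited set plus a stack that receives
// each node only after all of its children have been fully explored.
function topologicalSort(graph) {
  const visited = new Set();
  const stack = [];

  function explore(node) {
    if (visited.has(node)) return;
    visited.add(node);
    for (const child of graph[node] || []) {
      explore(child);
    }
    stack.push(node); // fully explored, so move it to the stack
  }

  for (const node of Object.keys(graph)) {
    explore(node);
  }

  // Children (dependencies) were pushed before their parents, so the
  // stack already lists every module before the modules that need it.
  return stack;
}

// Hypothetical module graph: each module lists the modules it imports.
const modules = {
  'main.js': ['utils.js', 'api.js'],
  'api.js': ['utils.js'],
  'utils.js': [],
};

console.log(topologicalSort(modules)); // [ 'utils.js', 'api.js', 'main.js' ]
```

Note that utils.js comes out first even though it was listed last: the order depends on the dependency edges, not on how the modules were declared.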

Hope this has given you a brief insight into how webpack builds and uses the dependency graph.

from Tumblr https://generouspiratequeen.tumblr.com/post/636004283443183616

What is the Jamstack?

What is the Jamstack?:


What Is Jamstack?

Web technology evolves over time and new tools and architectures are created and designed to address different problems in legacy systems. These could be in the development process, with performance, cost, user experience, scalability, and other system considerations. Jamstack is part of this evolution. But what exactly is it?

Jamstack is a frontend stack/architecture. It’s an acronym that stands for JavaScript, APIs, and Markup. In this architecture, markup that incorporates JavaScript is pre-built into static assets, served to a client from a CDN, and relies on reusable APIs for its functionality.

In more common web architectures, a web page is built using resources like data from a database, templates, and other content every time it is requested from a server and the resultant output is returned to the client. Jamstack is different from these architectures in that a site is pre-built into static assets before deployment, and these assets are distributed through a CDN.

The markup in the stack is the pre-rendered HTML of a site. The most basic kinds of Jamstack sites are plain HTML files styled with CSS. Augmenting these static sites with Javascript makes them dynamic and adds interactivity to the content. The markup is compiled at build time and then deployed. Frontend build tools are often used to automate these processes.

In the Jamstack, the frontend and the backend are completely separate. The APIs are distinct reusable services that provide specific functionality to static sites like payments, authentication, search, image uploads, comment management, etc. They could be created in-house or provided by vendors. The decoupling of the frontend from the backend has created opportunities to use a wide range of APIs with the frontend. There are all kinds of APIs available within the API economy, like PayPal, Algolia, Cloudinary, Auth0, etc.

Jamstack was coined as a term to better communicate what this new kind of decoupled architecture should look like. It is important to note that technologies that separated the frontend from the backend, prebuilt the sites, and distributed them over a CDN pre-dated the term. However, it was crucial to have the terminology to collectively describe all the tools that used this architecture, to make it easy to evangelize, and to set up best practices.

Jamstack sites can be built in different ways using different tools and technologies. The most common types of Jamstack site build tools include static site generators like Hugo, Jekyll, Gridsome, etc., and headless content management systems (CMSs) like Strapi.

Challenges of Established Web Architectures

Traditional web architectures tend to be monolithic systems and their constituent parts are often tightly coupled. Let’s take the example below.

In this type of architecture, when a page request is made from the client, the webserver routes the request to the app server based on the URL. The app server then makes a data request to the database. Once the database returns data, the app server combines the data and page templates and renders them into a response. It then sends the response to the webserver which in turn passes it along to the client. In some architectures, cache layers may exist between some of the parts to facilitate quicker responses.

This tightly coupled architecture has inherent risks and complications. These range from impacts on how users experience the site, the site’s development, complexity, performance, and its developers and team experience.

Performance Issues
Whenever a user requests a web page, it has to be built each time, multiple parts of the system need to be involved, and the response is passed along to several layers before it gets to the client. This slows down page load time. Some systems add caching layers to address this but implementing consistent caching can be difficult leading to users receiving varying results. The tight coupling also makes it challenging to migrate to better frameworks, update dependencies, or address bugs because it may adversely affect other parts of the system and take a long time to accomplish. These performance issues lead to lower visitor conversion on these sites.

Security Risks
Traditional architectures are made up of several parts, increasing the surface area that needs to be monitored and secured. Since all parts of the system are involved when processing a request, they are all vulnerable and possibly open to security threats. As a result of tight coupling, when security vulnerabilities are identified, developers have to choose between patching them and potentially breaking the site with the change.

Complex and Expensive Scaling
Another consequence of tight coupling in this architecture is expensive and complicated scaling. Because every response to a request needs to be built before serving, the parts of the architecture that build and serve responses need to be scaled to accommodate traffic increases. However, not all web pages need to be built for every request; a one-off build that generates a page to be served continually would suffice. As such, the part that builds the app and the one that serves it do not receive the same traffic and ideally should not be scaled at the same rate. But in this architecture, both are scaled proportionally. Scaling this architecture is generally expensive because of the type of technology involved.

Complicated Development and Maintenance
Owing to tight coupling, it’s complicated to make changes, updates, and push out bug fixes because of the widespread effects it would have on the entire system. This impedes flexibility to implement designs and make regular updates to improve user experience and developer workflows.

Benefits of using Jamstack

In Jamstack, the frontend is completely separated from the backend. The frontend is prebuilt before deployment, APIs are used to provide services to it, and it is distributed through a CDN. So here’s how a typical page request would be handled.

When a client makes a request for a web page, the CDN looks up the pre-rendered web page that matches the URL and sends it as a response to the client. The simplicity of this architecture has a range of benefits.

Compared to traditional architecture, Jamstack drastically improves performance because its tiers are reduced. Responses to requests go through fewer tiers and are therefore faster. The resources and time that would go into maintaining additional tiers in traditional architectures can be invested in optimizing the remaining ones. Since web pages are pre-rendered, no building happens for each request. This, coupled with the fact that multiple tiers do not have to interface with each other to generate a response and that sites are delivered through a CDN, contributes to faster responses. The potential for failures is greatly reduced because of pre-deployment building and the minimized surface area of the architecture. Pre-building pages ensures that any errors can be detected early and fixed before users get to interact with the site.

Another benefit of the loose coupling of Jamstack is that it is easy to make changes, push updates, introduce new features, refactor code, and upgrade to better vendors without having to worry about breaking the system. However, the interface through which each of the components interacts must remain the same.

Reduced Cost
In Jamstack, as a result of the decoupling, whole tiers/components may cease to exist in the infrastructure. If they do exist, they receive less traffic compared to the frontend and hence do not need to be scaled at the same rate. This heavily reduces the machine, labor, and software costs of the infrastructure. Jamstack sites are overall cheaper to build, scale, and maintain.

Easier to Secure
Having fewer components, Jamstack sites have a significantly smaller surface area to secure, maintain, and monitor. There are fewer points of entry that attackers can exploit and are therefore less vulnerable. No code is run on a server to build pages, making it difficult to inject exploitative code in the site. Services are outsourced to vendors with domain expertise, who are better equipped to secure and maintain them.

Enhanced Team and Developer Productivity
The Jamstack is simple because it has fewer components and is easier to understand. As such, developing and maintaining a site that uses this architecture tends to be a bit more straightforward. Developers of the site do not need to be completely adept at how each and every part of the system works. Since the components are loosely coupled and their boundaries clearly delimited, developers can specialize in the parts they work on.

Given the decreased number of tiers in the architecture, fewer developers are needed to work on the site. It also eliminates the need to have very specialized developers like DevOps engineers, SREs, etc. on teams. Since Jamstack sites are pre-rendered, there’s no need to have replicated environments like development, staging, testing, etc. This substantially reduces the amount of work needed to set up and maintain these environments. Usually, with Jamstack sites, there’s just one development environment and a pipeline for deployment. The reduced workload frees up time that allows developers on the team to better focus on understanding sections of the system they work on.

Jamstack sites are easy to extend with new features and designs, to upgrade, and to maintain because their components are loosely coupled. In traditional, tightly coupled web architecture, implementing new designs is often tough, takes a long time, and has multiple complications. Developers working on these systems are often exasperated, which sometimes leads to churn. Recruiting replacements can also be challenging, since developers may not want to work on inflexible projects. With Jamstack sites, additions and improvements are made relatively fast, improving reliability.

Jamstack site tools and technologies are widespread and modern. Jamstack best practices outline workflows that ensure productive and effective development. Most importantly, Jamstack allows teams to outsource complex services to vendors who provide, maintain, and secure APIs used on their sites.

Jamstack with Static Site Generators

Static site generators are build tools that add content and data to templates and produce static web pages of a site. These generators can be used for Jamstack sites. They are especially useful for large websites with many pages since they automate builds. Some well-known site generators include Hugo, Gatsby, Jekyll, Next.js, etc. You can find an expanded list of these generators at this link.

Jamstack with Headless CMSs

A content management system controls the production and alteration of digital content. A CMS is headless when it lacks a frontend and only has a backend that adds, provides, and modifies the content through an API. In some instances, the headless CMS may be augmented with an admin portal for content organization, workflow setup, etc. Headless CMSs fall under the APIs in Jamstack. Strapi is a great example of a headless CMS. It’s open-source and the content on it is provided through GraphQL or REST.

Jamstack Conventions

To get the full benefit of Jamstack, it’s important to follow some best practices. These include:

  • Host your code in a shared repository and use a version control system like Git to make content collaboration and contribution easier.
  • Leverage build tools to complement your frontend and automate development tasks.
  • Automate site builds since content changes are made often.
  • Make use of atomic deployments. An atomic deployment means that changes never go live until all modified files are uploaded, ensuring your site always appears consistent to your visitors.
  • Enforce instant cache invalidation for a consistent site.
  • Always rely on a CDN because it ensures your site is delivered fast to your visitors.


Jamstack is an architecture for static websites that are built before deployment and distributed through a CDN. Services to the website are provided through APIs and there is a complete separation of the backend and frontend. Jamstack sites have better performance, are easier to secure and scale, and cost a lot less than sites built with traditional architectures. They can be created using static site generators or with headless CMSs. To find out more about Jamstack sites head on over to jamstack.org.

from Tumblr https://generouspiratequeen.tumblr.com/post/636004282761609216

AWS – What, Why | Overview

AWS – What, Why | Overview:


I would personally suggest not going too deep into the links to the different services mentioned, as they may confuse you; I will be explaining them in future posts.

In this post, I will try to explain what AWS is in the simplest, most understandable way possible.

What the heck is AWS ?

AWS basically stands for Amazon Web Services. In essence, Amazon has a lot of computers stacked in places called data centres across the globe, and offers these computers for use to the general public at reasonable rates.

Why do we need AWS ?

Here are some of the benefits of using AWS:

  1. You don’t have to worry about managing the hardware. For example: let’s say during a Black Friday sale your application goes viral with its offerings and you receive 10 times the usual traffic. You can’t possibly add that much hardware fast enough, and your customers would be impacted. But if you use AWS, you can enable auto-scaling, which provisions hardware as per demand, or you can do it yourself with a single click (it’s that easy 😉)
  2. It’s cheaper to use AWS than to run your own hardware, and plenty of surveys on the internet back this up 😌.
  3. AWS customers enjoy a lot of services that would take them a long time to develop and manage on their own; with AWS, you can use them with a few clicks 😎, and the best part is that they integrate easily with other AWS resources.
  4. AWS is generally cheaper and more reliable than much of the competition.
  5. Since AWS is the oldest player in the game, its offerings are well tested by customers. For reference, Netflix, Prime Video, etc. use AWS for their infrastructure, and I am sure you’ll agree they work like a charm (though the developers of these apps are also major contributors to this, AWS has empowered them 😀)

What are the offerings by AWS ?

The offerings by AWS can be divided into two parts:

  1. Resources
    This includes things like EC2 instances (which can be thought of as small servers), which provision some hardware that we are able to interact with directly.

  2. Services
    This includes almost everything else at AWS: things like SQS queues, SNS topics, Lambda, and S3 storage, which at some lower level also provision hardware, but we are not in charge of managing that hardware and we can’t access it.

What are some of the bad points about AWS?

Here are a few:

  1. AWS may go down and your product may be impacted. You can think of this as putting all your eggs in one basket. To avoid such scenarios, make sure that you deploy your apps across different regions and availability zones, which minimises the impact on your service if something at AWS goes bad.
  2. Since resources are provisioned automatically (if you have configured AWS to do so), you may sometimes end up with hefty bills, so my advice would be to set alarms on billing and also configure resource provisioning accordingly.
  3. [inspired by @wowik] Data flowing outside the AWS network (to the open internet) is also charged, and it may happen that the amount you pay for data transfer equals or even exceeds what you pay for the resources 🙁.

That’s all from my side. In my future posts, I will try to explain some AWS services. If you want to add something, feel free to comment or reach out and I will make the changes accordingly.

from Tumblr https://generouspiratequeen.tumblr.com/post/636004282182811648