Setting up Express Server with TypeScript


Express.js is a web application framework built on top of Node.js. It provides a minimal interface with the tools required to build a web application, and it adds flexibility through the huge range of modules available on npm that you can plug into Express as needed.

Step 1: Create a .gitignore file

Add node_modules/ and .env to it, as we don’t want the node modules pushed to GitHub or our secret keys publicly available.
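The resulting .gitignore is just those two lines:

```
node_modules/
.env
```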


Step 2: Add dependencies

You may use yarn or npm (I am using yarn here).

yarn add <package> for dependencies

yarn add -D <package> for dev dependencies

NOTE: We might add more packages later on and discuss them as we move along. The versions may be newer for you, and some of the packages may be deprecated in the future. Also, since we are using TypeScript, we need the type definitions (@types) for all the dependencies we have added.

The dependencies shown below are the basic ones I think are required for the server to be up and running.

"dependencies": {
  "colors": "^1.4.0",
  "cors": "^2.8.5",
  "dotenv": "^8.2.0",
  "express": "^4.17.1"
},
"devDependencies": {
  "@types/cors": "^2.8.9",
  "@types/express": "^4.17.9",
  "concurrently": "^5.3.0",
  "nodemon": "^2.0.6"
}

Step 3: Create tsconfig.json file and add the following

Configuring TypeScript

You might want to look at the official documentation, which provides more insight into configuring TypeScript; study the other available parameters and use them according to your needs.

{
  "compilerOptions": {
    /* Basic Options */
    "target": "es6" /* Specify ECMAScript target version. */,
    "module": "commonjs" /* Specify module code generation. */,
    "sourceMap": false /* Generates corresponding '.map' file. */,
    "outDir": "./dist" /* Redirect output structure to the directory. */,
    "rootDir": "./src" /* Specify the root directory of input files. */,

    /* Strict Type-Checking Options */
    "strict": true /* Enable all strict type-checking options. */,

    /* Module Resolution Options */
    "moduleResolution": "node" /* Specify module resolution strategy. */,
    "baseUrl": "./" /* Base directory to resolve non-absolute module names. */,
    "paths": {
      "*": ["node_modules/*", "src/types/*"]
    },
    "esModuleInterop": true /* Enables interop between CommonJS and ES Modules. */,

    /* Advanced Options */
    "skipLibCheck": true /* Skip type checking of declaration files. */,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["src/types/*.ts", "node_modules", ".vscode"]
}

Step 4: Create the main file

Create a src folder in your project and add an app.ts file with the following contents to get your Express server up and running.

Relative Path: src/app.ts

import express, { Application, json, Request, Response } from "express";
import "colors";
import cors from "cors";
import { config } from "dotenv";

// Load environment variables from .env
config();

const app: Application = express();

// Middleware
app.use(cors());
app.use(json());

const PORT: string | number = process.env.PORT || 5000;
const ENV: string = process.env.NODE_ENV || "development";

app.get("/", (_req: Request, res: Response) => {
  return res.send("API Running...");
});

app.listen(PORT, () =>
  console.log(
    ` 📡 Backend server: `.inverse.yellow.bold +
      ` Running in ${ENV} mode on port ${PORT}`.yellow
  )
);

Step 5: Setting up running scripts

Add the following to the package.json file

"scripts": {
  "watch-ts": "tsc -w",
  "server": "nodemon dist/app.js",
  "dev": "concurrently -k -p \"[{name}]\" -n \"TypeScript,Node\" -c \"blue.bold,yellow.bold\" \"yarn run watch-ts\" \"yarn run server\""
}

Now run “yarn run dev” to start our server and voilà, we have our server up and running.

You should see this as your output in the terminal, and a dist/ directory should appear in your project containing the compiled JavaScript code in ES6 syntax.

Also, there’s a ts-node package that runs the Node server directly from TypeScript files, without any need to generate JavaScript files.

from Tumblr

Why You Should Be Writing React Custom Hooks



You’re probably familiar with built-in React hooks like useEffect and useState. But have you explored writing custom hooks? Or thought about why you would want to?

“No, why would I?” You might ask. And since you’re playing along so kindly, I’ll tell you!

Custom hooks are a handy way to encapsulate hook-related logic that can be re-used across components when using component composition isn’t really something that will help, make sense, or just “look” semantically right.

Think of a custom hook as a super-powered helper function. According to the rules of hooks, you can’t call a hook (like useEffect) in an ordinary helper function that is declared outside of a component. But you can call hooks inside custom hooks!

Additionally, if you have a component in which you have two or more separate pieces of useEffect logic going on, you might want to consider putting them into custom hooks to separate and name them, even if this isn’t logic that will be shared by any other component.

This is much like encapsulating logic into a well-named function for the sake of readability and code organization. After all, it’s a bit tough to read a string of useEffect routines and understand what’s going on. But if, on the other hand, you have one called something like useSyncCustomerRecordStore, then your consumer code is that much more readable.

Headless Components

It’s not quite a perfect comparison, but in a way, you can think of custom hooks as being a bit like headless components. Mostly because they can call hooks themselves, such as useEffect and useState. These built-in React hooks can work in custom hooks the same way they work in components.

The difference between a custom hook and a component is that a custom hook will return values, not React components or markup. In this way, they’re sort of like component helpers.

The Shape Of A Custom Hook

Custom hooks are really just:

  • Functions whose names begin with ‘use…’
  • Functions which can call other hooks

A simple custom hook might look like this:

// Custom hook code
function useMyCustomHook(someDataKey) {

    const [someValue, setSomeValue] = useState(null);

    useEffect(() => {
        // ...derive a new value from someDataKey, then store it with setSomeValue
    }, [someDataKey]);

    return someValue;
}

// Consumer component code
function MyAwesomeComponent({someDataKey}) {

    const someValue = useMyCustomHook(someDataKey);

    return (
        <div>The new value is {someValue}</div>
    );
}

Example: Page Data

I’m currently working on an enterprise application suite realized in the form of micro-service applications. To the user, it seems like one large application, but really, under the hood, it’s a collection of several independent React apps.

These apps need to refer to each others’ pages with links and common titles, and that data — called pageData — is set up in a context provider so that any component at any level in the apps can access it with a useContext hook.

Now, it is pretty simple to use this data without writing a custom hook. All a consumer component has to do is import the PageDataContext and then call useContext on it, like this:

// External Libraries
import React, { useContext } from 'react';

// App Modules
import PageDataContext from './PageDataContext';

function MyComponent() {

    const pageData = useContext(PageDataContext);

    return (
        <div>{/* ...render something with pageData... */}</div>
    );
}

Okay, So Why Use A Custom Hook For This?

Okay, so that’s pretty simple, right? It’s only three lines of code: two import statements, and a call to useContext. In that case, why am I still recommending a custom hook for a situation like this?

Here are a few reasons, from least to most important:

Eliminating Boilerplate Adds Up

If you just look at this one example, I’m only eliminating one line of boilerplate, because I will still have to import my custom hook, useGetPageData. I only really eliminate the line that imports useContext.

So what’s the big deal? The thing is, just about every page in my enterprise app suite needs to use this pageData object, so we’re talking hundreds of components. If we eliminate even one line of boilerplate from each one, we’re talking hundreds of lines.

And believe me, just writing that extra line every time I create a new page feels that much more annoying, so there’s a sort of psychological/motivational benefit that adds up over time, too.

Well-Named Functions

If you’ve used hooks like useEffect much in your code, you’ve probably come across situations where there are two or three pieces of useEffect logic (either in separate calls to useEffect, or combined into one). This quickly gets hard to take in when you’re reading the code.

If you’re like me, you wind up putting comments about each piece of useEffect logic, such as:

    // Get the page data
    useEffect(() => {
        // ... stuff happens here
    }, []);

But one of the fundamental concepts of readable code is noticing where you’re writing blocks of comments in big dumping ground “main” type functions, and instead separating those pieces of logic into their own, individual, well-named functions. Another developer reading your code is going to have a much easier time taking it all in when these details are abstracted away from the big picture. But when they’re ready to drill into detail, they can go look at the function declaration.

The same is true of custom hooks. If I see this in the component code, I have a pretty good idea of what is going on:

   const pageData = useGetPageData();


Encapsulated Logic

I’ve saved the most important reason for last, and that’s that it is good to encapsulate the logic in one place. Sure it’s only two lines of code, but what if we decide to store pageData in a Redux or MobX store instead of React Context?

If we’re already using a custom hook, no problem! We just change the internal code in the hook and return the same pageData object back to the consumer code. What we don’t have to do is go and update hundreds of components to import, say, useSelector, and then call it instead of useContext.

What useGetPageData Looks Like

It’s dead simple! Just:

// External Libraries
import { useContext } from 'react';

// App Modules
import PageDataContext from './PageDataContext';

function useGetPageData() {
    return useContext(PageDataContext);
}

Other Things You Can Do With Custom Hooks

The example I gave for page data is intentionally very basic, but there are many more useful things you can do with custom hooks, such as encapsulating shared logic for updating and reading Redux state. Just think of anything you want to do with hooks but for which you want to avoid a bunch of copy/paste boilerplate, and you’re set to start getting creative with it.


Archunit: Validate the architecture of our projects



When you create a microservice or a library, you start by defining the location of each group of elements (enums, classes, interfaces) in one or more packages, along with the best practices to follow; i.e. if you use Spring Boot, the controllers should be annotated with @RestController. At the beginning everything will be fine, because a small group of developers works on the project. However, when developers join your team, or developers from another team in the company add new functionality, that is when the errors appear.

To prevent errors in the code of the project, someone on your team needs to review all the changes before they are merged into master. This approach looks good at the beginning, but there are some problems:

  • It is a really time-consuming task.
  • The rules could change in the future, so everyone needs to know the new rules before reviewing any code.

How Can You Resolve This Problem In A Simple Way?

At the back of your mind, the first solution would be tools like Checkstyle, PMD, or FindBugs, which do a static analysis of the code, but these tools cannot check the structure of the project.

Some years ago ArchUnit appeared: a free and extensible library for checking the architecture of your project using plain unit tests. This library gives you the chance to check the layer structure (which package/class depends on another), validate annotations on each class, check for cyclic dependencies, and more.

To start writing ArchUnit tests, you will need to add the dependency matching the version of JUnit that you use:

    archunit-junitX //"X" could be 4/5

After adding the dependency your first Archunit test should look like the example below:
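The original example isn't reproduced here, but a minimal first test of this shape might look like the following sketch, assuming JUnit 5; the package name com.myapp is a placeholder for your project's root package:

```java
import com.tngtech.archunit.core.importer.ImportOption;
import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.fields;

// 1. Tell ArchUnit which package to validate, skipping test code
@AnalyzeClasses(packages = "com.myapp",
                importOptions = ImportOption.DoNotIncludeTests.class)
public class ArchitectureTest {

    // 2. Mark the field as an ArchUnit test
    @ArchTest
    // 3. The rule: no field in this package may be public
    static final ArchRule fields_should_not_be_public =
            fields().should().notBePublic();
}
```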

Let me explain each block of code in detail:

  1. With this annotation, you tell ArchUnit which package it needs to validate. A good practice is to point it at the package that contains all the objects of your project; it is also good practice to tell ArchUnit to skip the tests and validate only the code that does something.
  2. You need to indicate that the field or method is an ArchUnit test.
  3. This is the test with all the conditions to validate. The example checks that no field inside any class in this particular package is public. Just to clarify, “fields()” is a static import of “com.tngtech.archunit.lang.syntax.ArchRuleDefinition.fields”.

In this post, you can see some examples of which things you can validate with Archunit.

General rules of coding

ArchUnit defines some general coding rules you can use to validate your project; most of them are defined in the class “GeneralCodingRules”. To use these rules you just add them as tests to your project; examples include rules that no classes should use java.util.logging or throw generic exceptions.

Naming conventions

You can validate that all the objects in one particular package contain a suffix, i.e. xxxController or xxxService.

Layered architecture

Good architecture needs to have different layers, and each layer will only be able to access some other particular layers. You can define the layers and the relations between them; i.e. the controllers in one microservice cannot be accessed by any other layer.

Return type

In the controllers’ endpoints, you can validate the type of the response. For example, in the case of Spring Boot, a good practice is to return a “ResponseEntity”.


Annotations

Sometimes you need to validate that some classes, interfaces, methods, or fields use a particular annotation.

Custom Rules

There are some cases where the rules you need to validate are not covered by ArchUnit by default, so you can create custom conditions, like validating that your entities/model objects contain “equals” and “hashCode” methods.

You can then use this custom condition inside one of your tests.

Best Practices

There are some good practices in order to reduce the lines of code and organize the test depending on the types of rules. Here are some of them:

  • Define the package of the classes to analyze, the name of the packages or any constant in one class to have a place with all the constants you use in all the tests.
  • Define the rules in a way you can reuse them in different places, i.e. you can create a rule that checks that private methods are not allowed and call these methods from different places.
  • Try to put all the rules related to one validation group in one class, i.e. create a class that contains all the rules related to the controller validations in the case of one microservice.
  • Split each test class so that all the validations related to classes, fields, constructors, and methods can be found in a simpler way. The important thing at this point is to be able to identify each group easily and to prevent the same rules from being checked in different ways.


Archunit is a powerful library that gives you the chance to validate every rule you can imagine in your architecture and reduce the number of errors in your projects.

You need to understand that if you add ArchUnit to an existing project, you will find some issues, because most projects have some mistakes related to the structure or the names of the classes. It’s key to add the general rules at the beginning and then add the specific ones.


Debugging Your React App


There are so many weird things that happen when you’re working on a React app. Sometimes you fix a bug in one place and it causes a bug in some seemingly unrelated area. It’s like a game of whack-a-mole, but you can approach it with a strategy.

Take advantage of all the browser tools

You might be able to quickly find the problem by looking at the network tab in your browser’s developer tools and checking for any odd status codes. You can also use the elements tab to start tracking down the real issue. Sometimes inspecting an element will point you to the right source file to dig into.

With React in particular, installing the React Dev Tools in Chrome is a game-changer. You can look at the props of components, find out which components are nested inside of each other, and see if things are being rendered as you expect. Use these tools to give you a great place to start looking for an issue.

Start in a file that comes from your browser tool search

Once you’ve figured out which file is a good starting point, jump in there and start looking for anything unusual. Are there any states that aren’t being updated? Is there a function that isn’t being called as expected? Is there an unnecessary div that’s throwing off your styles?

This is where the debugging effort can take you down the rabbit hole. Try and approach it as systematically as possible. If you found the method that’s causing issues, start drilling in there. Spend some time looking in this place, but if you notice you’re spending more than an hour there, it might be time to go down another rabbit hole.

Make sure you’re passing the right data in the right format

One of the things you have to deal with when working with JavaScript is that it isn’t a strongly-typed language. That means the shape of your data can change at any time and cause the strangest things to happen and silently cause errors. Many times this is how we end up with those undefined values that we know for a fact have real values.

Using Typescript is one way around this, but if your project isn’t in a place to start integrating that, you’ll have to pay attention to any changes to APIs you work with. A common thing that happens is that there are changes on the back-end that don’t get communicated to the front-end developers. So make sure you check your data before you start a major refactor.

Check any parent components

Sometimes the real issue isn’t with the component or function you’re looking at. One good example is when you can’t get position: sticky to work. There might be some parent element high up in the DOM tree that has an overflow: hidden property set. This can be true for a number of issues.
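For example, this is the kind of CSS combination that silently breaks sticky positioning (the class names here are just for illustration):

```css
/* Any ancestor with overflow: hidden (or auto/scroll) creates a new
   scrolling context, so the child never sticks to the viewport */
.parent {
  overflow: hidden;
}

.child {
  position: sticky; /* has no visible effect inside .parent */
  top: 0;
}
```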

You might have a context that is pulling from the wrong data source or it doesn’t actually have state hooks set up like you thought it would. When you’ve torn apart a file looking for the bug, try going up a level. The root cause could be buried in a place you wouldn’t suspect.

Compare files

Many times our components and views are created using similar architectures. As an app grows, it’s not uncommon for a view or component to fall out of the standard set up. Check that the problem file looks similar to other files like it. Finding those clues by looking for examples from other parts of the app will rule out the simple stuff early on.

Having this kind of uniformity in a codebase helps find and prevent issues because you can visually spot the difference between files. Maybe there’s a prop not being passed to the right component or maybe there’s a component that should be used instead of what’s in place.

Check your packages

There are some packages that aren’t compatible with each other. That could be the problem if you’ve drilled down in the code and landed in the node_modules folder. This is a deeper issue, and one that might lead to crawling through Stack Overflow. To check for this, take a look at the versions in your package.json and compare them with the current versions on the npm site.

You might find that your installed version is out of date or that you’re not using the package you thought you were. When your debugging leads you here, it’s time to start looking for workarounds or replacements.

Those miscellaneous checks

Sometimes there are just weird things combining to make the perfect bug storm. If you’re having issues with data loading, make sure it’s not a CORS or permissions problem. If you can’t figure out why those styles aren’t quite right, check for styles on the parent components.

Having routing issues? Check that the routes are defined in the correct place with the right components. Maybe the state management approach in the app is a little difficult to understand, so add comments as you figure things out. That will pay off tremendously in the future.

Debugging is hard. There are bugs that take time to track down, but these steps will give you a good checklist to get started. When you’ve been hitting your head against the desk for too long trying to fix a bug, get up and walk away for a while. After you’ve taken a break, moved around a bit, and maybe had a snack, come back and see if these tips help!


Emphasize backup for container data protection



As container adoption continues to grow, admins have to rethink their data backup and protection strategies. Luckily, there are some best practices and tools that can help.

Industry buzz around containers and Kubernetes can make it difficult for IT professionals to uncover best practices to use these technologies. This is especially true in the critical area of backup and recovery.

Containers and Kubernetes, frequently used interchangeably in the context of storage and data protection, are not the same technology. Kubernetes is a tool to manage and orchestrate the execution of containers. Containers require data protection, including backup, because, while their data has been predominantly ephemeral, persistent storage for containers has come into focus.

It can be challenging for admins to establish the need for container data protection as part of an overall backup and recovery strategy. At many organizations, data protection is an afterthought for new workloads, especially since, traditionally, containers have not used persistent storage. In addition, Kubernetes clusters are architected for high availability. Some IT professionals, as a result, assume container environments do not require backups.

This couldn’t be further from the truth. Container-based workloads are beginning to generate mission-critical data and require persistent storage. This data must be recoverable in the face of disruptive events, including infrastructure outages, cyberattacks and accidental or malicious data deletion.

Container data protection challenges and tips

Container environments operate at massive scale in terms of the number of container instances that comprise various application components. Rather than an entire app being mapped to a set of VMs or to a physical system, various app components are distributed across multiple systems for fault tolerance and load balancing. This creates challenges for hypervisor- or system-centric backup tools, especially those that require an agent on each server. Admins can also create and destroy containers instantaneously and at will, resulting in a dynamic environment that changes frequently.


To establish a container data protection strategy, IT professionals must protect everything the application or database needs to run. These resources lie both inside and outside of the container cluster in external storage and databases. It is difficult to achieve consistent backups for applications or databases due to the rate at which data changes. Also, it is not feasible to install an agent to execute an application-consistent backup on each container.

Always test for backup integrity and recoverability. Container data protection should be part of the overall backup and recovery implementation – from the beginning. This provides confidence in the ability to recover, as well as clarity in the recovery process.

Key players

A variety of vendors provide container backup capabilities, resulting in a crowded market. Container data protection vendors including Cohesity, Commvault, Dell Technologies, Druva, HYCU, IBM, Veeam, Veritas and Zerto are adding Kubernetes support. Newer vendors specializing in Kubernetes backup include Trilio; Kasten, which was acquired by Veeam in 2020; and Portworx, which was acquired by Pure Storage in the same year.

To choose a vendor, first consider how the container data protection tool will fit into the organization’s broader backup and recovery implementation. In addition, ensure the product can meet required service levels for recovery point objectives and recovery time objectives.


How to make a random password generator using javascript


So today we are going to build a random password generator using HTML, CSS and JS, so let’s start.

First, let’s look at the folder structure.


In the project root, create an index.html file, a CSS file in the css folder, and a JS file in the js folder. For copying the password we also need a clipboard image, so download one.

Open the project in your code editor:

code .

Link the CSS and JS files in the index.html file.

Now let’s start coding. Write the entire HTML.
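The original markup isn't reproduced here, but based on the selectors used in style.css below, a sketch of index.html could look like this (the passBox id and the image filename are assumptions):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>Random Password Generator</title>
  <link rel="stylesheet" href="css/style.css" />
</head>
<body>
  <div class="inputBox">
    <h2>Random Password Generator</h2>
    <input type="text" id="passBox" placeholder="Click Generate" readonly />
    <img src="clipboard.png" class="copy" alt="copy" />
    <div id="btn">Generate</div>
  </div>
  <div class="alertbox">Password Copied</div>
  <script src="js/main.js"></script>
</body>
</html>
```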

After that we want to write the CSS, so let’s start. Copy the entire style.css from here:

* {
  margin: 0;
  padding: 0;
  font-family: Consolas;
  user-select: none;
}

body {
  display: flex;
  justify-content: center;
  align-items: center;
  height: 100vh;
  background: #f8f8f8;
}

.inputBox {
  position: relative;
  width: 450px;
}

.inputBox h2 {
  font-size: 28px;
  color: #333333;
}

.inputBox input {
  position: relative;
  width: 100%;
  height: 60px;
  border: none;
  margin: 15px 0 20px;
  background: transparent;
  outline: none;
  padding: 0 20px;
  font-size: 24px;
  letter-spacing: 4px;
  box-sizing: border-box;
  border-radius: 4px;
  color: #333333;
  box-shadow: -4px -4px 10px rgba(255, 255, 255, 1),
    inset 4px 4px 10px rgba(0, 0, 0, 0.05),
    inset -4px -4px 10px rgba(255, 255, 255, 1),
    4px 4px 10px rgba(0, 0, 0, 0.05);
}

.inputBox input::placeholder {
  letter-spacing: 0px;
}

.inputBox #btn {
  position: relative;
  cursor: pointer;
  color: #fff;
  background-color: #333333;
  font-size: 24px;
  display: inline-block;
  padding: 10px 15px;
  border-radius: 8px;
}

.inputBox #btn:active {
  background-color: #9c27b0;
}

.copy {
  position: absolute;
  top: 58px;
  right: 15px;
  cursor: pointer;
  opacity: 0.2;
  width: 40px;
  height: 40px;
}

.copy:hover {
  opacity: 1;
}

.alertbox {
  position: fixed;
  top: 0;
  left: 0;
  height: 100vh;
  width: 100%;
  background-color: #9c27b0;
  color: #fff;
  align-items: center;
  text-align: center;
  justify-content: center;
  font-size: 4em;
  display: none;
}

/* class toggled from the script to show the alert
   (the selector name was missing in the original) */
.alertbox.active {
  display: flex;
  justify-content: center;
  align-content: center;
}
Now let’s write the JS file. Open it and add the JavaScript code.
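The original script isn't included here, but a sketch along these lines fits the markup and styles above; the element ids/classes (passBox, btn, copy, alertbox) and the "active" class are assumptions:

```javascript
// Pure helper: build a random password of the given length
function generatePassword(length) {
  const chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
    "abcdefghijklmnopqrstuvwxyz" +
    "0123456789" +
    "!@#$%^&*()_+";
  let password = "";
  for (let i = 0; i < length; i++) {
    password += chars[Math.floor(Math.random() * chars.length)];
  }
  return password;
}

// DOM wiring: only runs in the browser
if (typeof document !== "undefined") {
  const passBox = document.getElementById("passBox");
  const btn = document.getElementById("btn");
  const copyIcon = document.querySelector(".copy");
  const alertBox = document.querySelector(".alertbox");

  // Generate a new password on button click
  btn.addEventListener("click", () => {
    passBox.value = generatePassword(12);
  });

  // Copy the password and briefly show the full-screen alert
  copyIcon.addEventListener("click", () => {
    navigator.clipboard.writeText(passBox.value);
    alertBox.classList.add("active");
    setTimeout(() => alertBox.classList.remove("active"), 1000);
  });
}
```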


Hunting and anti-hunting groups locked in tit-for-tat row over data gathering



The leaking of internal documents has prompted a row between pro- and anti-hunting groups about the legality of the other’s data collection practices

Longstanding disagreements between hunting groups and anti-hunting activists have broken out into allegations of illegal data gathering from both sides.

Activists claim that two leaked internal documents created by pro-hunting groups suggest they are collecting and holding personal information on hunt saboteurs – activists that use sabotage as a form of direct action to stop illegal fox hunting – and further suggest the data is being shared with counter-terror police.

The saboteurs have accused the hunting groups of illegally collecting their personal information and are now seeking to instigate multiple claims under the General Data Protection Regulation (GDPR).

However, there are allegations that these documents have been obtained illegally and may be the subject of a criminal investigation. The hunt supporters involved are also concerned about how personal information about hunt members might be used by the activists, following a series of recent data breaches.

The hunting groups and hunt saboteurs deny engaging in any illegal activity.

Monthly reports on anti-hunting activists

The data collection practices of the Hunting Office (HO), a central organisation delegated to run the administrative, advisory and supervisory functions of the UK’s hunting associations, and the Countryside Alliance (CA), a campaign organisation with over 100,000 members that promotes rural issues, have been questioned by activists running a website called Hunting Leaks.

The website owners said that a monthly round-up of anti-hunting activity – which appears to have been shared via email with hunts across the UK – was passed on to Hunting Leaks by an undisclosed animal rights group.

The leaked document, a report on saboteur activity between 14 November and 12 December 2020, lists the names of anti-hunting groups, the names of 30 activists (some of which are referred to multiple times) and information about their vehicles, including registration numbers. 

It also includes information on the number of anti-hunting activists in attendance, details about their movements and activity on a given hunt day, as well as guidance for how hunt members should approach collecting information and video footage.

For example, it said that hunt members should not engage with saboteurs as they use heavily edited footage on social media to discredit hunts, and that any photographs or video footage should be gathered in a non-confrontational manner so as not to put hunt supporters in any dangerous situations.

The document further states the collection of this information has directly led to a number of successful convictions against hunt opponents.

In response to questions from Computer Weekly, the CA said its practices are compliant with GDPR, adding that detailed legal advice was sought and clear processes have been put in place.

A CA spokesperson said: “There is no justification for leaking and publishing the details of private individuals who support a lawful activity other than to intimidate them and leave them vulnerable to harassment. We hope the police will investigate this matter thoroughly and punish those responsible appropriately.”

Data protection concerns

The Telegraph reported on 22 January 2021 that three hunts – the New Forest Hounds, the Cottesmore Hunt and the Mendip Farmers’ Hunt – have all been hit by data breaches this month, where home addresses and contact details were published online by anti-hunting groups, although it is unclear if the breaches are connected to the leaks in question.

The breaches have prompted police to write to hundreds of hunt members, warning them to secure their digital footprints and review the security of their properties while the incidents are being investigated.

Benjamin Mancroft, chairman of the HO, told the Telegraph: “We are investigating the source of the breaches and cases of illegal hacking of our members’ private email accounts, as well as the theft of personal data by animal rights extremists.

“We take these security breaches very seriously – this coordinated attack from anti-hunt groups and resultant online harassment of our members and the potential exposure to violence and criminal damage is something our community should not have to tolerate.”

A post on the Hunting Leaks website said: “The Countryside Alliance claim to use the ‘prevention of crime’ as an excuse to get round GDPR regs [regulations] and then create these reports which they broadcast to every hunt in the country. Yet in this most recent of reports covering the entire country for four weeks, there is not one incident where saboteurs have been arrested and charged.”

Nothing we’ve been given looks like it’s from hacked emails. Any allegation that what we have put out has been illegally obtained is just that – an allegation
Ernie Goldman, Hunting Leaks member

Ernie Goldman, a member of Hunting Leaks, said the documents passed on to the group come from a variety of sources – including anonymous senders and even people clearly involved in hunting themselves – but he claimed there is nothing to suggest they were illegally obtained.

“Nothing we’ve been given looks like it’s from hacked emails – there are no screenshots of people’s private conversations, just hunt-related documents, spreadsheets, red books, subscriber lists,” he said. “Any allegation that what we have put out has been illegally obtained is just that – an allegation – and we do not recognise that allegation at all.”

Goldman added that the collection of information on anti-hunting activists could have serious consequences.

“West Midlands hunt saboteurs…recently had someone come to their home and pour petrol through their letterbox,” he said. “Elsewhere this season, a house was attacked that used to be occupied by hunt sabs [saboteurs] and is now occupied by a retired couple – a number of their windows were smashed.”

Lee Moon, a spokesperson for the Hunt Saboteurs Association, said the impact on individuals named in the report “can be life changing”.

“Sabs have had their homes damaged, vehicles set on fire and dead foxes left on their doorsteps. The number of sab vehicles stolen from outside our properties has also always seemed unusually high,” he said.

“The Hunt Saboteurs Association are grateful to Hunting Leaks for bringing this matter to the public’s attention, and we look forward to a full and thorough investigation by the ICO [Information Commissioner’s Office].”

Second leaked document

A second leaked document – a saboteur update from September 2019 that was also passed on to Hunting Leaks – suggests the CA holds a central database with further information about anti-hunting activists.

The document suggests the CA said it had created a dedicated database to hold relevant information on hunt saboteurs, which will be collected by hunt members and teams of evidence gatherers.

It suggested that, without this evidence, the CA’s lobbying, media and social media functions cannot be used to full effect against hunt saboteurs, and that the CA and HO were in agreement that the failure to collect such data has been the primary barrier to dealing with anti-hunting opposition.

It further claimed that data collected in the database is for a different purpose to the basic data that has long been collected and distributed on hunting days, which it said was solely for assisting hunt masters in deciding how to conduct their day’s activity.

It is currently unclear if this is referring to information collated in the leaked monthly report or something else, and what the exact relationship is between those reports and the central database.

The CA said it does not have a “systematic database of animal rights activists”.

“[As that document states], data held by the alliance is mainly photographs and videos of incidents and activity that has taken place at hunts or hunt property, together with supporting material from social media, that could lead to criminal prosecution for violence, public order, harassment or other offences,” a CA spokesperson said.

“The alliance does not hold a systematic database of animal rights activists, it does not hold sensitive personal information on such people, and it only shares relevant information with law enforcement bodies.”

The CA declined to say how many anti-hunting activists’ personal information it was holding.

Goldman noted that because 30 activists were already in the monthly report from November 2020, the database could contain information on hundreds of people.

“The CA database has been known about, or at least guessed, for some years,” he said. “There are a number of individuals in hunts across the country who are very overtly taking lots of pictures…The confirmation for us came in the past few weeks when we were passed on the CA document outlining the central database.”

The Countryside Alliance pointed out that another anti-hunt organisation, the League Against Cruel Sports, also gathers personal information “in relation to individuals that we investigate for animal cruelty in the name of ‘sport’ and in support of our campaigns”, as stated in the LACS privacy policy.

Hunting Leaks has denied any connection to the LACS.

Data sharing arrangement with counter-terror police

The leaked document also suggested that the new system had been promising so far, and that the CA is working with the Counter Terror Policing – National Operations Centre (CTP-NOC) after agreeing an information-sharing protocol to pass on data about extremist activity. It added that targeting ringleaders was a main priority.

A Counter Terror Policing spokesperson said that, prior to April 2020, it was the responsibility of the CTP-NOC “to gather and assess information in relation to protest groups on behalf of UK policing nationally, primarily to ensure they did not pose a security threat, but also to help forces facilitate lawful protest and prevent criminal activity.

“Since April 2020, the responsibility for gathering such information was handed to the National Police Operations Centre [NPoCC], allowing CTP-NOC to focus on keeping the country safe from terrorism,” it added.

The NPoCC was sent questions about the information-sharing protocol – including the nature of how it works, whether any action has been taken against anti-hunting activists as a result of the agreement, and why they are of interest to counter-terror police – but did not respond by time of publication. It is unclear when this agreement was made.

Kevin Blowe, a coordinator at the Network for Police Monitoring (Netpol), told Computer Weekly: “We don’t yet know for certain whether there is a formal information-sharing protocol between the Countryside Alliance and Counter-Terrorism Policing.”

Netpol has submitted a freedom of information request to the Metropolitan Police, which leads on counter-terrorism, seeking clarification.

“We do know, however, that the Countryside Alliance has a vested interest in portraying hunt saboteurs in the worst possible light because it is largely evidence gathered by the alliance’s opponents that puts pressure on extremely reluctant police forces to act on persistent breaches of the Hunting Act,” said Blowe.

“Perhaps because of the number of formerly very senior retired police officers involved in ‘field sports’ organisations and the power and influence of local landowners, we know that the police are deeply suspicious of hunt sabs.

“We have monitored this for a number of years and heard about the misuse of police stop and search powers intended for finding offensive weapons, wrongful arrests, the use of police drones and indifference by officers towards threats of violence. This all seems designed to actively frustrate efforts to investigate illegal fox hunting.”

Counter Terror Policing’s interest in anti-hunting activists

Goldman claimed that counter-terror police have been historically motivated to investigate the animal rights movement – something that has continued to the present day.

On 10 January 2020, the Guardian reported on a counter-terrorism police briefing document distributed to medical staff and teachers as part of the government’s anti-radicalisation Prevent programme.

In it, Counter-Terrorism Policing listed a number of groups it viewed as “extremist”, including Extinction Rebellion, Stop the Badger Cull and the Hunt Saboteur Association, alongside fascist groups such as Combat 18 and Generation Identity.

If the relationship between the police and the Countryside Alliance has led to any exchange of information about hunting’s opponents, this is a disturbing example of political policing using sympathetic allies to try to quash dissent
Kevin Blowe, Network for Police Monitoring

“Police forces traditionally have seen hunt saboteurs as the people breaking the law – that should, of course, have changed with the Hunting Act coming into force in 2005,” said Goldman.

“There are certainly far fewer arrests of hunt sabs, but the police are still very much seen by sabs to be in the pocket of the hunts with the high-up connections, rich landowners, judges, high-ranking police officers, etc., all riding with hunts.”

In late November 2020, two secretly recorded Zoom webinars hosted by the HO appear to show some of the UK’s leading hunt personnel, including high-ranking former police officers, discussing how to avoid prosecution for allegedly illegal fox hunting, as well as how to use trail hunting as a “smokescreen” to disguise their activities from authorities. 

ITV later reported the webinars were being investigated by police officers in conjunction with the Crown Prosecution Service to see if any criminal offences have taken place.

“As was evidenced in the leaked Hunting Office webinars last year, most hunts have spent the past 15 years building up a smokescreen to hide their criminal acts,” said Moon.

“Hunt saboteurs are out there doing the police’s job for them by stopping the hunts, yet week after week we are targeted by police who seem to be in the hunts’ pockets. To find this high-level engagement between the police and the CA at least starts to make sense of the regular police bias we experience out in the fields.”

Blowe added: “If the relationship between the police and the Countryside Alliance has led to any exchange of information about hunting’s opponents, this is a disturbing example of political policing using sympathetic allies to try to quash dissent.”

Goldman said Hunting Leaks’ aim is to obtain a public explanation from the CA about why it is holding data on anti-hunting activists, and to secure reparations for the victims.

“We plan on doing this by publishing leaked data on hunts that we hold. We are publishing at a rate of about once a week and we will be doing this throughout the spring and summer, or until the CA capitulates,” he said.

from Tumblr

How Redux works? (Only HTML & Pure JS)

How Redux works? (Only HTML & Pure JS):

This is a code example of Redux built with only HTML and pure JavaScript. A Code Sandbox version is available.

<!DOCTYPE html>
<html>
  <head>
    <title>Redux basic example</title>
    <!-- the Redux UMD build is loaded here -->
    <script src=""></script>
  </head>
  <body>
    <div>
      <p>
        Clicked: <span id="value">0</span> times
        <button id="increment">+</button>
        <button id="decrement">-</button>
        <button id="incrementIfOdd">Increment if odd</button>
        <button id="incrementAsync">Increment async</button>
      </p>
    </div>
    <script>
      function counter(state, action) {
        if (typeof state === 'undefined') {
          return 0
        }
        switch (action.type) {
          case 'INCREMENT':
            return state + 1
          case 'DECREMENT':
            return state - 1
          default:
            return state
        }
      }

      var store = Redux.createStore(counter)
      var valueEl = document.getElementById('value')

      function render() {
        valueEl.innerHTML = store.getState().toString()
      }

      render()
      store.subscribe(render)

      document.getElementById('increment')
        .addEventListener('click', function () {
          store.dispatch({ type: 'INCREMENT' })
        })

      document.getElementById('decrement')
        .addEventListener('click', function () {
          store.dispatch({ type: 'DECREMENT' })
        })

      document.getElementById('incrementIfOdd')
        .addEventListener('click', function () {
          if (store.getState() % 2 !== 0) {
            store.dispatch({ type: 'INCREMENT' })
          }
        })

      document.getElementById('incrementAsync')
        .addEventListener('click', function () {
          setTimeout(function () {
            store.dispatch({ type: 'INCREMENT' })
          }, 1000)
        })
    </script>
  </body>
</html>

The webpage shows “Clicked: 0 times” alongside four buttons: +, -, Increment if odd, and Increment async.

  1. createStore & counterReducer
// Counter reducer
function counterReducer(state, action) {
    if (typeof state === 'undefined') {
        return 0;
    }
    switch (action.type) {
        case 'INCREMENT':
            return state + 1;
        case 'DECREMENT':
            return state - 1;
        default:
            return state;
    }
}

// Create store
var store = Redux.createStore(counterReducer);
  • createStore receives the counterReducer function as a parameter and returns an object called store.
  • This is the diagram of the createStore function, with the mental model as a class.

Here is a simplified version of createStore from the Redux source code:

function createStore(reducer, initialState) {
  var currentReducer = reducer;
  var currentState = initialState;
  var listeners = [];
  var isDispatching = false;

  function getState() {
    return currentState;
  }

  function subscribe(listener) {
    listeners.push(listener);

    return function unsubscribe() {
      var index = listeners.indexOf(listener);
      listeners.splice(index, 1);
    };
  }

  function dispatch(action) {
    if (isDispatching) {
      throw new Error('Reducers may not dispatch actions.');
    }

    try {
      isDispatching = true;
      currentState = currentReducer(currentState, action);
    } finally {
      isDispatching = false;
    }

    listeners.slice().forEach(listener => listener());
    return action;
  }

  function replaceReducer(nextReducer) {
    currentReducer = nextReducer;
    dispatch({ type: '@@redux/INIT' });
  }

  dispatch({ type: '@@redux/INIT' });

  return { dispatch, subscribe, getState, replaceReducer };
}
  • currentReducer = counterReducer
  • currentState = initialState (undefined here, since we passed no initial state)
  • When the store is created, it initially dispatches an action with type '@@redux/INIT' so that every reducer returns its initial state. In the case of counterReducer, it returns 0.
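To see this initialization end to end, here is a runnable sketch (my own stripped-down variant, dropping isDispatching and replaceReducer for brevity) that pairs the simplified createStore with counterReducer outside the browser:

```javascript
// Stripped-down createStore: just enough to show the '@@redux/INIT' flow.
function createStore(reducer, initialState) {
  var currentReducer = reducer;
  var currentState = initialState;
  var listeners = [];

  function getState() {
    return currentState;
  }

  function subscribe(listener) {
    listeners.push(listener);
    return function unsubscribe() {
      listeners.splice(listeners.indexOf(listener), 1);
    };
  }

  function dispatch(action) {
    currentState = currentReducer(currentState, action);
    listeners.slice().forEach(function (listener) { listener(); });
    return action;
  }

  // Initial dispatch: every reducer returns its initial state.
  dispatch({ type: '@@redux/INIT' });

  return { dispatch, subscribe, getState };
}

function counterReducer(state, action) {
  if (typeof state === 'undefined') {
    return 0;
  }
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    case 'DECREMENT':
      return state - 1;
    default:
      return state;
  }
}

var store = createStore(counterReducer);
console.log(store.getState()); // 0 -- produced by the '@@redux/INIT' dispatch
```

Because counterReducer treats an undefined state as a request for its initial value, the very first dispatch leaves the store holding 0 before any user interaction.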

What happens inside the dispatch function?

// Dispatch function inside the Redux store (simplified)
function dispatch(action) {
    currentState = currentReducer(currentState, action)
    const listeners = (currentListeners = nextListeners)
    for (let i = 0; i < listeners.length; i++) {
        const listener = listeners[i]
        listener()
    }
    return action
}
  • The function currentReducer, which is counterReducer, is called.
  • Because the action type is '@@redux/INIT' and currentState is undefined, counterReducer returns 0 as the default value, which becomes the initial state of the store.
  • Now, currentState is 0.
  • After updating the state with the initial value, it calls all listeners subscribed to the store to notify them.
var valueEl = document.getElementById('value')

function render() {
  valueEl.innerHTML = store.getState().toString()
}

  • In this case, we have the render() function; it is called back and updates the DOM element with the initial value.
  • Now, in the browser, we will see the number 0 shown.

Updating state when action is sent

document.getElementById('increment')
    .addEventListener('click', function () {
      store.dispatch({ type: 'INCREMENT' })
    })
  • When the user clicks the “+” button, the store dispatches the action with type 'INCREMENT' to the store’s reducer, and the flow is the same as explained above.
  • The function currentReducer is called with state 0 and action type 'INCREMENT'.
  • Because 'INCREMENT' is a case inside the counterReducer function, the new state is now equal to 0 + 1 and is returned as the state of the store.
  • Next, it again notifies the listeners to let them know the state was updated successfully.
  • Now, on the screen, we will see Clicked: 1 times.
  • The flow is similar for the other action types.
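Since counterReducer is a pure function, that same flow can be traced without any store or DOM at all; this small sketch simply replays the action sequence by hand:

```javascript
// counterReducer, exactly as used by the store above.
function counterReducer(state, action) {
  if (typeof state === 'undefined') {
    return 0;
  }
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    case 'DECREMENT':
      return state - 1;
    default:
      return state;
  }
}

// Replay the sequence the store runs through:
var state = counterReducer(undefined, { type: '@@redux/INIT' }); // 0
state = counterReducer(state, { type: 'INCREMENT' });            // 1
state = counterReducer(state, { type: 'INCREMENT' });            // 2
state = counterReducer(state, { type: 'DECREMENT' });            // 1
console.log(state); // 1
```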

So this is basically how Redux works under the hood. In a real-life project, a Redux store may have multiple reducers, middleware, and third-party libraries that enhance the Redux workflow. But at its very core, this is how it works!


You Don’t Know JS: Scope & Closures: What’s the Scope?

You Don’t Know JS: Scope & Closures: What’s the Scope?:


Let’s now learn about the compilation of a program:

Compiling Code

  • Scope is primarily determined during compilation, so understanding how compilation and execution relate is key in mastering scope.
  • There are mainly three stages of compilation:
    1. Tokenizing/Lexing
    2. Parsing
    3. Code Generation


Tokenizing/Lexing

Breaking up a string of characters into meaningful (to the language) chunks, called tokens. For example:

  var a = 2;

This program would likely be broken up into the following tokens: var , a , = , 2 , and ;. Whitespace may or may not be persisted as a token, depending on whether it’s meaningful or not.
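As a rough illustration of that idea, a toy tokenizer (my own sketch, far simpler than a real engine’s lexer) can split the statement into those tokens:

```javascript
// Toy tokenizer: breaks a tiny statement into meaningful chunks.
// Whitespace is consumed but not kept as a token.
function tokenize(src) {
  var tokens = [];
  var re = /\s*([A-Za-z_$][\w$]*|\d+|=|;)/g;
  var match;
  while ((match = re.exec(src)) !== null) {
    tokens.push(match[1]);
  }
  return tokens;
}

console.log(tokenize('var a = 2;')); // [ 'var', 'a', '=', '2', ';' ]
```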


Parsing

Parsing is the process of taking a stream of tokens and turning it into a tree of nested elements, called the Abstract Syntax Tree, or AST.

For example, the tree for var a = 2; might start with a top-level node called VariableDeclaration , with a child node called Identifier (whose value is a ), and another child called AssignmentExpression which itself has a child called NumericLiteral (whose value is 2 ).
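That tree can be sketched as a plain JavaScript object; the shape below follows the book’s simplified description, not the exact node names a real parser (Acorn, Babel, etc.) would emit:

```javascript
// Simplified AST for `var a = 2;`, mirroring the description above.
var ast = {
  type: 'VariableDeclaration',
  children: [
    { type: 'Identifier', value: 'a' },
    {
      type: 'AssignmentExpression',
      children: [
        { type: 'NumericLiteral', value: 2 }
      ]
    }
  ]
};

console.log(ast.type); // 'VariableDeclaration'
```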

Code Generation

Code generation involves taking an AST and turning it into executable code. This part varies greatly depending on the language, the platform it’s targeting, and other factors.

NOTE: The implementation details of a JS engine (utilizing system memory resources, etc.) are much deeper than we will dig here. We’ll keep our focus on the observable behavior of our programs and let the JS engine manage those deeper system-level abstractions.

Required: Two Phases

  • The most important observation we can make about the processing of JS programs is that it occurs in (at least) two phases: parsing/compilation first, then execution.
  • The separation of a parsing/compilation phase from the subsequent execution phase is an observable fact. There are three program characteristics you can observe to prove this to yourself: syntax errors, early errors, and hoisting.

Syntax Errors from the Start

  • Consider the program:
var greeting = "Hello";
console.log(greeting);
greeting = ."Hi";
// SyntaxError: unexpected token .
  • When we try to execute this program, it shows no output; instead, it throws a SyntaxError about the unexpected . token right before the "Hi" string.
  • Since JS compiles the whole program before executing it, rather than interpreting it line by line, the string was never printed: the error was caught before any code ran.

Early Errors

  • Now, consider:
console.log("Howdy");

saySomething("Hello", "Hi");
// Uncaught SyntaxError: Duplicate parameter name not
// allowed in this context

function saySomething(greeting, greeting) {
  "use strict";
  console.log(greeting);
}
  • The "Howdy" message is not printed, despite being a well-formed statement. Instead, just like the snippet in the previous section, the SyntaxError here is thrown before the program is executed.
  • In this case, it’s because strict-mode (opted in for only the saySomething(..) function here) forbids, among many other things, functions to have duplicate parameter names; this has always been allowed in non-strict-mode.
  • Here also, we can observe that the code was first fully parsed and then only the execution began. Otherwise, the string "Howdy" would be printed.


Hoisting

  • Finally, consider:
function saySomething() {
  var greeting = "Hello";
  {
    greeting = "Howdy"; // error comes from here
    let greeting = "Hi";
    console.log(greeting);
  }
}

saySomething();
// ReferenceError: Cannot access 'greeting' before initialization
  • The noted ReferenceError occurs on the line with the statement greeting = "Howdy".
  • What’s happening is that the greeting variable in that statement belongs to the declaration on the next line, let greeting = "Hi", rather than to the previous var greeting = "Hello" statement.
  • The only way the JS engine could know, at the line where the error is thrown, that the next statement would declare a block-scoped variable of the same name (greeting) is if it had already processed this code in an earlier pass and already set up all the scopes and their variable associations.

Compiler Speak

  • Let’s now learn how the JS engine identifies the variables and determines their scopes as the program is compiled.
  • Let’s first see an example:
var students = [
  { id: 14, name: "Kyle" },
  { id: 73, name: "Suzy" },
  { id: 112, name: "Frank" },
  { id: 6, name: "Sarah" },
];

function getStudentName(studentID) {
  for (let student of students) {
    if (student.id == studentID) {
      return student.name;
    }
  }
}

var nextStudent = getStudentName(73);

console.log(nextStudent);
// Suzy
  • All occurrences of variables/identifiers in a program serve in one of two “roles”: either they’re the target of an assignment or they’re the source of a value.
  • If a variable is being assigned a value, then it is a target otherwise a source of value.


Targets

  • In the above code, since the students and nextStudent variables are each assigned a value, they are both targets.
  • There are three other target assignment operations in the code that are perhaps less obvious. One of them:
for (let student of students) {

This statement assigns a value to student for each element of the array students.

Another target reference:

getStudentName(73)

Here, the argument 73 is assigned to the parameter studentID.

The last target reference in the program is:

function getStudentName(studentID) {

A function declaration is a special case of a target reference. Here, the identifier getStudentName is assigned a function as a value.

So, we have identified all the targets in the program, let’s now identify the sources!


Sources

  • The sources are as follows:
for (let student of students)

Here the student is a target but the array students is a source reference.

if (student.id == studentID)

In this statement, both the student and studentID are source references.


return student.name;

student is also a source reference in the return statement.

In getStudentName(73), getStudentName is a source reference (which we hope resolves to a function reference value). In console.log(nextStudent), console is a source reference, as is nextStudent.

NOTE: In case you were wondering, id, name, and log are all properties, not variable references.

Cheating: Runtime Scope Modifications

  • Scope is determined as the program is compiled, and should not generally be affected by runtime conditions.
  • However, in non-strict-mode, there are technically still two ways to cheat this rule, modifying a program’s scopes during runtime.
  • The first way is to use the eval(..) function that receives a string of code to compile and execute on the fly during the program runtime. If that string of code has a var or function declaration in it, those declarations will modify the current scope that the eval(..) is currently executing in:
function badIdea() {
  eval("var oops = 'Ugh!';");
  console.log(oops); // Ugh!
}

badIdea(); // Ugh!
  • If the eval(..) function was not present, the program would throw an error that the variable oops was not defined. But eval(..) modifies the scope of the badIdea() function at runtime.
  • The second way to cheat is the with keyword, which essentially dynamically turns an object into a local scope — its properties are treated as identifiers in that new scope’s block:
var badIdea = { oops: "Ugh!" };

with (badIdea) {
  console.log(oops); // Ugh!
}

  • The global scope was not modified here, but badIdea was turned into a scope at runtime rather than at compile-time, and its property oops becomes a variable in that scope.

NOTE: At all costs, avoid eval(..) (at least, eval(..) creating declarations) and with. Again, neither of these cheats is available in strict-mode, so if you just use strict-mode (you should!) then the temptation goes away!

Lexical Scope

  • JS’s scope is determined at compile-time, the term for this kind of scope is lexical scope.
  • “Lexical” is associated with the “lexing” stage of compilation, as discussed earlier in this chapter.

NOTE: It’s important to note that compilation doesn’t do anything in terms of reserving memory for scopes and variables.
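To close, a tiny sketch (the function and variable names are my own, hypothetical) of what compile-time scope means in practice: the variables a function can see are fixed by where the function is written, not by where it is called from:

```javascript
var label = 'outer';

function whichLabel() {
  // Lexically, this `label` is the outer one: the reference was
  // resolved from the function's position in the source, at compile time.
  return label;
}

function caller() {
  var label = 'inner'; // never visible to whichLabel
  return whichLabel();
}

console.log(caller()); // 'outer'
```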


JavaScript News and Updates of January 2021

JavaScript News and Updates of January 2021:

Hello everyone! The new year has just begun, but the JavaScript world is already buzzing with exciting news and updates. While exploring various sources, I have learned some interesting JavaScript stuff and am eager to share it with you.

Get ready to learn about key findings from two fresh reports on JavaScript trends, a new micro-framework from the DHTMLX team, a major update of the Snowpack build tool, and the latest web initiatives. In addition, I have also collected some tips and articles that will help to broaden your knowledge and skills in JavaScript.

Let’s roll!

New Tools and Updates

Highlighting New JavaScript Trends

The turbulent year of 2020 is over, but we continue analyzing the changes that have been reshaping the JavaScript landscape over the last twelve months to better understand the upcoming trends. For this purpose, I suggest taking advantage of two recently released JavaScript surveys.

State of JS is one of the most popular and insightful JavaScript research projects based on real user feedback. The 2020 edition reveals some interesting facts. First of all, the majority of respondents (more than 88%) still love using JavaScript for building apps, but at the same time, a considerable part of the survey participants (39%) think that JavaScript is overcomplicated. When talking about the state of things in JavaScript technologies, it is worth mentioning the growing popularity of Svelte and Snowpack, stable advancement of TypeScript and Next.js, as well as rising dissatisfaction with Angular, Redux, and Mocha. It is also nice to mention that is ranked in the top 3 of Blogs & Magazines read by developers. Get more valuable info on the State of JS page.

JavaScript Rising Stars 2020 is another interesting source of information on the JavaScript trends based on the number of stars added to various tools on GitHub during the last year. The most starred JavaScript instruments of 2020 are Deno (+30.2k☆), Vue.js (+22.5k☆), React (+19.8k☆), Playwright (+19.7k☆), VS Code (+19.1k☆). Due to a different approach used for the preparation of this report, the results in key categories may differ from the State of JS outcomes, but it still can be useful to take a look at the full report.

Introducing DHTMLX Optimus

DHTMLX Suite is a popular JavaScript library that provides a collection of lightweight and highly customizable UI widgets for implementing various functionalities in web projects. From now on, this library can be utilized more conveniently and productively with DHTMLX Optimus. It is a new micro-framework enabling web developers to build DHTMLX-based applications of any complexity with less time and effort.

The framework allows making the most of modern coding technologies such as ES6 classes, JS modules, and the Webpack bundler. Thus, it is possible to define the front-end structure of a specific web app with a set of independent components (modules) and reuse them later in other projects. Optimus can also be employed in combination with any server-side technology. Here is a step-by-step instruction on how to start using Optimus in real-case scenarios. If you want to learn more about this framework, check out the release article.

Meet Snowpack v3.0

Fast, simple, efficient – these are three words that vividly describe a bundle-free and JavaScript ESM-powered build tool named Snowpack. The popularity of this tool is growing really fast and it is already considered by many as a viable alternative to more intricate and well-established tools such as Webpack. Earlier this month, Snowpack was updated to version 3.0.

This major release provides a set of powerful new features for faster web development. For instance, it offers a new way of loading any packages directly into your project via streaming imports. Another remarkable feature of this update is a built-in optimizer based on esbuild. This novelty helps to prepare production builds much faster than Webpack or Rollup. Starting from v3.0, Snowpack also comes with a modified JavaScript API and Node.js runtime. Find more details in the release article.

W3C Launches MiniApps Working Group

World Wide Web Consortium (W3C), one of the main driving forces behind the evolution of web standards, has recently established the MiniApps Working Group. MiniApp is a relatively new and promising format of mobile apps combining advantages of web technologies (CSS, JavaScript, etc.) and enhanced user experience of native apps.

The primary goal of the W3C’s initiative is to work out specifications that will ensure maximal integration of MiniApps with the web architecture, better interoperability between various MiniApp platforms, and more active promotion of this technology among web developers. Read this material to get a deeper insight into the plans of the MiniApps Working Group.

Open Web Docs – a New Project for Improving Web Documentation

A coalition of large technology companies has recently announced the launch of the Open Web Docs project. This undertaking aims to provide better maintainability and sustainability of Web-API and JavaScript documentation for web platform technologies. The list of founders and contributors to this project includes Google, Microsoft, Mozilla, W3C, and others. It is expected that Open Web Docs will support and closely cooperate with the main documentation platforms such as MDN Web Docs. Additional information on the project and its strategy for 2021 can be found in this article.

Useful Tips and Articles

Selecting Development Tools for Business Web Apps

There are many reasons why JavaScript is continuously ranked high among programming languages in various surveys. One of the key factors in favor of JavaScript is its huge ecosystem and plenty of tools that significantly simplify various development stages. But the abundance of coding technologies raises a serious question of choice for developers.

When talking about building business apps, standard technology stacks are also frequently complemented with special JavaScript UI libraries. Such UI components help to implement comprehensive functionalities for project management, scheduling, data visualization, etc. This article contains a lot of useful information on top web development tools, including JS UI libraries, that suit well for developing business apps.

Comprehensive Guide to Front-End Performance Optimization in 2021

High performance is probably one of the key success indicators of any web application. As web development technologies become more sophisticated, web developers have to consider a growing number of metrics, tools, and front-end techniques to achieve optimal application performance. Otherwise, it is hard to expect that end users will get a quick and seamless experience. Where to start improving performance? What are the possible bottlenecks? And how do you keep your app fast enough on a long-term basis? These and many other related questions are addressed in this great front-end performance checklist for 2021.

Best JavaScript Charting Tools for Visualizing Data in Business Apps

It is hard to imagine any modern business web application without data visualization capabilities. Using various types of charts, it is possible to present complex data in a straightforward way. However, it may take a lot of time to implement a charting functionality from scratch. Therefore, web developers frequently utilize ready-made JavaScript charting components to save time and exclude unnecessary coding errors. This article highlights the most popular JavaScript libraries dedicated specifically to visualizing data in web apps.

How to Deal with Memory Leaks in Web Apps

When building a single-page application (SPA), some web developers pay little attention to keeping the app’s memory usage low, which can lead to memory leaks. This kind of issue can cause increased resource consumption on users’ devices, poor runtime performance, or even program crashes. If you are interested in learning techniques that will help to detect and fix memory leaks in SPAs, the blog post prepared by Nolan Lawson is exactly what you need.

Insight into JavaScript SEO

All owners of online businesses are undoubtedly interested in getting the highest search engine rankings to attract more potential customers. It can be hardly achieved without proper JavaScript SEO optimization. But SEO specialists frequently face many challenges on the way to making JavaScript content SEO-friendly. To do the job right, it is necessary to clearly understand how search engines crawl and index JavaScript, ways to facilitate this process, common obstacles in JavaScript SEO, and their solutions. The author of this article has a lot of experience in search marketing and shares valuable tips on optimizing JavaScript code for SEO.
