Sunday, November 24, 2019

Banana in a box for Angular 1.x code monkeys :)

Two-way data binding used to be this simple in AngularJS 1.x:

<input ng-model="todo.completed" type="checkbox" />


Angular 2 made two-way data binding explicit, which can be accomplished by either of the following:

<input [checked]="todo.completed" (input)="todo.completed = $event.target.checked" type="checkbox" />

<input [ngModel]="todo.completed" (ngModelChange)="todo.completed = $event" type="checkbox"/>


Still pining for the old days? You can do Angular 1's terser approach in Angular 2:
<input [(ngModel)]="todo.completed" type="checkbox" />

That's called banana in a box. Just put parentheses (the banana) around ngModel, inside the square brackets (the box); otherwise it won't work.


It's true, we are code monkeys :)


Using the ngModel approach produces an error if FormsModule is not imported:

Can't bind to 'ngModel' since it isn't a known property of 'input'


To solve that, import FormsModule from "@angular/forms"; a good place to import it is app.module.ts, the entry point of the app.
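For reference, a minimal app.module.ts could look like this (a sketch assuming a default Angular CLI project; your module will likely declare more components):

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
// FormsModule provides ngModel, which the banana-in-a-box syntax relies on
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';

@NgModule({
    declarations: [AppComponent],
    imports: [BrowserModule, FormsModule],
    bootstrap: [AppComponent]
})
export class AppModule {}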

Monday, October 14, 2019

Absolute import with Next.js and TypeScript

Make the Next.js project TypeScript-capable by following this:

https://nextjs.org/blog/next-9#built-in-zero-config-typescript-support


Create a shared directory in the project's root directory.

Under the shared directory, create randomizer.ts:

export function randomizer() {
    return 4;
}


Create deeply/nested directories under the components directory.

Under the components/deeply/nested directory, create SampleComponent.tsx:

import React from "react";

// import { randomizer } from "../../../shared/randomizer";

import { randomizer } from "love/randomizer";

export function SampleComponent() {
    const a = randomizer();

    return (
        <div>
            <strong>I am strong {a}</strong>
        </div>
    );
}

Edit pages/index.tsx, then add SampleComponent, e.g.,

<p className="description">
    To get started, edit <code>pages/index.js</code> and save to
    reload.
</p>

<SampleComponent />


Add this import statement to pages/index.tsx:

import { SampleComponent } from "../components/deeply/nested/SampleComponent";

Edit tsconfig.json and add the following under compilerOptions:

"baseUrl": ".",
"paths": {
    "love/*": ["shared/*"]
}

Webpack must be informed of TypeScript's paths; this can be done by modifying webpack's resolve.alias object. To modify webpack's configuration, create next.config.js in the project's root directory, then add the following:

const path = require("path");

module.exports = {
    webpack(config, { dev }) {
        config.resolve.alias = {
            ...config.resolve.alias,
            love: path.resolve(__dirname, "shared")
        };

        return config;
    }
};

Run yarn dev; everything should work.


To avoid manually synchronizing tsconfig's paths with webpack's resolve.alias, use the following helper (sourced from https://gist.github.com/nerdyman/2f97b24ab826623bff9202750013f99e):

Create resolve-tsconfig-path-to-webpack-alias.js in project's root directory:

const { resolve } = require('path');

/**
 * Resolve tsconfig.json paths to Webpack aliases
 * @param  {string} tsconfigPath           - Path to tsconfig
 * @param  {string} webpackConfigBasePath  - Path from tsconfig to Webpack config to create absolute aliases
 * @return {object}                        - Webpack alias config
 */
function resolveTsconfigPathsToAlias({
    tsconfigPath = './tsconfig.json',
    webpackConfigBasePath = __dirname,
} = {}) {
    const { paths } = require(tsconfigPath).compilerOptions;

    const aliases = {};

    Object.keys(paths).forEach((item) => {
        const key = item.replace('/*', '');
        const value = resolve(webpackConfigBasePath, paths[item][0].replace('/*', '').replace('*', ''));

        aliases[key] = value;
    });

    return aliases;
}

module.exports = resolveTsconfigPathsToAlias;

Change next.config.js to the following:

const path = require("path");

const resolveTsconfigPathsToAlias = require("./resolve-tsconfig-path-to-webpack-alias");

module.exports = {
    webpack(config, { dev }) {
        config.resolve.alias = {
            ...config.resolve.alias,
            ...resolveTsconfigPathsToAlias()
        };

        return config;
    }
};

Run yarn dev; everything should work.


Sample working code: https://github.com/ienablemuch/experiment-react-nextjs-typescript-absolute-import

Saturday, September 28, 2019

React projects' initial size with 4K cluster size

npx create-razzle-app a1-create-razzle-app
npx create-react-app a2-create-react-app
npx create-next-app a3-create-next-app
npx create-react-app a4-create-react-app-pnp --use-pnp


Developers-iMac:_experiment_ dev$ du -sh a1-create-razzle-app/
1.3G a1-create-razzle-app/
Developers-iMac:_experiment_ dev$ du -sh a2-create-react-app/
2.2G a2-create-react-app/
Developers-iMac:_experiment_ dev$ du -sh a3-create-next-app/
707M a3-create-next-app/
Developers-iMac:_experiment_ dev$ du -sh a4-create-react-app-pnp/
231M a4-create-react-app-pnp/

Friday, September 27, 2019

Customize webpack from create-react-app

$ npx create-react-app experiment-react-customized-webpack --scripts-version react-scripts-rewired --use-pnp

Create a directory named shared under the src directory.

Create randomizer.js under the shared directory:
export function randomizer() {
    return 4;
}

Create the directories components/deeply/nested under the src directory.

Create SampleComponent.js under the src/components/deeply/nested directory:
import React from "react";
import { randomizer } from "../../../shared/randomizer";

export function SampleComponent() {
    const a = randomizer();
    return (
        <>
            <strong>I am strong {a}</strong>
        </>
    );
}
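
For the message to show, the component has to be rendered somewhere. A minimal sketch, assuming the default create-react-app src/App.js is replaced like this (adjust to wherever you actually render components):

import React from "react";
import { SampleComponent } from "./components/deeply/nested/SampleComponent";

export default function App() {
    // renders "I am strong 4"
    return <SampleComponent />;
}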


Run

It should show on the page:

I am strong 4


Change SampleComponent.js import statement:
import { randomizer } from "../../../shared/randomizer";

to
import { randomizer } from "love/randomizer";

Run

It will fail to compile.


Change webpack.config.extend.js code to:

const path = require("path");
/**
 * Project's customized Webpack Configuration Extension
 * ----------------------------------------------------
 *
 * this file is heavily inspired by `react-app-rewired` mechanism.
 *
 * it simply gives you the chance to hook into the default Webpack
 * configuration as it is provided by `create-react-app`, and to
 * change it so to match your project's needs.
 *
 * If you want to check out the default values look into:
 * `./node_modules/marcopeg-react-scripts/config/webpack.config.${env}.js`
 *
 */
module.exports = (webpackConfig, env, { paths }) => {
    // here you can extend your webpackConfig at will
    webpackConfig.resolve.alias = {
        ...webpackConfig.resolve.alias,
        love: path.resolve(__dirname, "src", "shared")
    };
    return webpackConfig;
};

Run

It should show on the page:

I am strong 4




Thursday, August 29, 2019

sh: react-scripts: command not found

sh: react-scripts: command not found
npm ERR! file sh
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR! syscall spawn




It could be that you are running a script from Visual Studio Code's NPM Scripts sidebar. To fix that, add the following configuration in the settings.json file under the .vscode directory. If neither the .vscode directory nor settings.json exists, just create them.


{
    "npm.packageManager": "yarn"
}

Restart VS Code, then click the start script from the NPM Scripts sidebar.

If you don't want to create the .vscode directory and settings.json file, there's a global settings location where the npm.packageManager setting can be applied. This can be set by going to the menu under File > Preferences > Settings (Code > Preferences > Settings on macOS). On the User tab, type package manager, then change Npm: Package Manager to yarn.



Restart VS Code.

Alternatively, it can be set directly in the global settings.json file by opening the Command Palette. The Command Palette can be accessed in the menu under View > Command Palette (keyboard shortcut: Control+Shift+P on Windows, Command+Shift+P on Mac). Then type settings.



Choose Preferences: Open Settings (JSON).

Then add "npm.packageManager", then choose "yarn":



Restart VS Code

Thursday, August 15, 2019

Error: Invariant failed: You should not use Link outside a Router

As of the time of this writing, the latest version of connected-react-router is 6.5.2, and the latest version of react-router and react-router-dom is 5.0.1. However, I got an error saying Link should not be used outside a Router when using those versions.



Cause of the error: https://github.com/ReactTraining/react-router/issues/6634#issuecomment-500084539


The connected-react-router example (even though the latest version is 6.5.2) uses connected-react-router 6.0.0: https://github.com/supasate/connected-react-router/blob/d822fb9afd12e9d32e8353a04c1c6b4b5ba95f72/examples/basic/package.json#L26

That example uses react-router and react-router-dom 4.3.1.


To fix the error, just use the same versions as the example. Pin connected-react-router to 6.0.0, and both react-router and react-router-dom to ^4.3.1.

"dependencies": {
    "connected-react-router": "6.0.0",
    "history": "^4.9.0",
    "react": "^16.9.0",
    "react-dom": "^16.9.0",
    "react-redux": "^7.1.0",
    "react-router": "^4.3.1",
    "react-router-dom": "^4.3.1",
    "react-scripts": "3.1.1",
    "redux": "^4.0.4"
},


Then delete yarn.lock and node_modules:
$ rm yarn.lock
$ rm -rf node_modules

Then reinstall:
$ yarn

Then add prop-types:
$ yarn add prop-types


As of the time of this writing, I got ^15.7.2 for prop-types.


That's all.




Wednesday, August 14, 2019

npm install -g without sudo

On my machine, the ~/.npmrc file did not exist, so I created one and added the following:


prefix=~/.npm


And in ~/.bash_profile (which also did not exist on my machine, so I created it), I added this line:

export PATH="$HOME/.npm/bin:$PATH"


After restarting the terminal, npm install -g now works even without sudo.
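
A quick way to verify (typescript here is just an arbitrary example of a global package):

$ npm install -g typescript
$ which tsc

The second command should print a path under ~/.npm/bin.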


Followed these steps:

https://medium.com/@ExplosionPills/dont-use-sudo-with-npm-still-66e609f5f92

https://www.reddit.com/r/applehelp/comments/9tf7hx/bashrc_on_mojave/



Thursday, August 8, 2019

All things being equal, they are not equal

I put all my shared files on an external drive with the exFAT file system. I found out later that the default cluster size (128 KB) that macOS allocates for exFAT drives is atrociously wasteful for npm-based projects. The external drive is permanently plugged into the back of the iMac's Thunderbolt 3 port.


On a 128 KB cluster size partition, a 3.5 GB npm-based project consumes 81 GB of space; yes, that's 81 GB, not 8.1 GB. On a 4 KB cluster size partition, the same project consumes only 5.44 GB.
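
A back-of-envelope explanation: node_modules is dominated by tiny files, and every file occupies at least one whole cluster. The numbers below are made up purely for illustration (not measurements of my project), but they show the shape of the problem:

// Illustrative numbers only: assume ~600,000 files averaging ~6 KB each.
const files = 600000;
const avgSizeKB = 6;

function onDiskGB(clusterKB) {
    // each file occupies a whole number of clusters
    const perFileKB = Math.ceil(avgSizeKB / clusterKB) * clusterKB;
    return (files * perFileKB) / (1024 * 1024);
}

console.log(onDiskGB(4).toFixed(2));   // ≈ 4.58 GB with 4 KB clusters
console.log(onDiskGB(128).toFixed(2)); // ≈ 73.24 GB with 128 KB clusters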

I resized the exFAT drive's cluster size to 4 KB; there is a problem when the computer boots, though. The exFAT drive affects the boot speed of the computer. Do note that the computer boots from the internal drive (APFS), not from the external exFAT drive.

With the 4 KB exFAT drive attached, a cold boot to the Apple logo takes almost ten minutes.

Here are the boot times for the other exFAT cluster sizes:

8 KB = 2 minutes 50 seconds
16 KB = 50 seconds
32 KB = 16 seconds
64 KB = 8 seconds
128 KB = 7 seconds


For other file systems' cluster sizes:

4 KB APFS = 5 seconds
4 KB NTFS = 5 seconds

Neither of those file systems affects the boot speed of the computer, even when permanently plugged into the Thunderbolt port.

Ultimately I decided to use NTFS, with third-party software on macOS that can read and write NTFS drives.


Further space can be saved by deleting node_modules from npm-based projects and using Yarn Plug'n'Play.




Saturday, August 3, 2019

Could not read from remote repository

Got this error when pushing code to GitHub:

Developers-iMac:experiment-react-route-based-lazy-load dev$ git push -u origin master
Warning: Permanently added the RSA host key for IP address '13.250.177.223' to the list of known hosts.
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.


One solution that works:

Developers-iMac:experiment-react-route-based-lazy-load dev$ git remote set-url origin https://github.com/ienablemuch/experiment-react-route-based-lazy-load
Developers-iMac:experiment-react-route-based-lazy-load dev$ git push -u origin master
Enumerating objects: 28, done.
Counting objects: 100% (28/28), done.
Delta compression using up to 8 threads
Compressing objects: 100% (28/28), done.
Writing objects: 100% (28/28), 302.35 KiB | 12.09 MiB/s, done.
Total 28 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), done.
To https://github.com/ienablemuch/experiment-react-route-based-lazy-load
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.

Got the solution from: https://stackoverflow.com/questions/21255438/permission-denied-publickey-fatal-could-not-read-from-remote-repository-whil/55239661#55239661

Wednesday, July 24, 2019

Simple docker-compose example

Create a bare-bones Node.js app by following this: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/


Then update this section of the code in server.js:

// App
const app = express();
app.get('/', (req, res) => {
  const message = `Hello world: ${process.env.GREETING_MESSAGE}`;
  res.send(message);
});
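
docker-compose's build: . expects the Dockerfile from that guide in the project root; it looks roughly like this (reconstructed from the seven build steps shown in the output further below, so treat it as a sketch):

FROM node:10

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]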


Create docker-compose.yml:

version: '3'
services:
  spiderman: 
    build: .
    ports:
        - "80:8080"
    environment:
        GREETING_MESSAGE: 你好世界


To test:
$ docker-compose up -d

Output:
Developers-iMac:experiment-nodejs-docker-compose dev$ docker-compose up -d
Creating network "experiment-nodejs-docker-compose_default" with the default driver
Building spiderman
Step 1/7 : FROM node:10
 ---> e05cbde47b8f
Step 2/7 : WORKDIR /usr/src/app
 ---> Running in 95c12e65805a
Removing intermediate container 95c12e65805a
 ---> be613c39b5ab
Step 3/7 : COPY package*.json ./
 ---> 619110050a58
Step 4/7 : RUN npm install
 ---> Running in e32232fc4928
npm WARN docker_web_app@1.0.0 No repository field.
npm WARN docker_web_app@1.0.0 No license field.

added 50 packages from 37 contributors and audited 126 packages in 1.148s
found 0 vulnerabilities

Removing intermediate container e32232fc4928
 ---> 92b713b4c799
Step 5/7 : COPY . .
 ---> ee82daff1c05
Step 6/7 : EXPOSE 8080
 ---> Running in a797e79b2f85
Removing intermediate container a797e79b2f85
 ---> 838f931be1ea
Step 7/7 : CMD [ "node", "server.js" ]
 ---> Running in c3b932bb7e0c
Removing intermediate container c3b932bb7e0c
 ---> 5a0d42751576
Successfully built 5a0d42751576
Successfully tagged experiment-nodejs-docker-compose_spiderman:latest
WARNING: Image for service spiderman was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating experiment-nodejs-docker-compose_spiderman_1 ... done


List the created Docker image (think of an OOP class):
Developers-iMac:experiment-nodejs-docker-compose dev$ docker images
REPOSITORY                                   TAG                 IMAGE ID            CREATED             SIZE
experiment-nodejs-docker-compose_spiderman   latest              5a0d42751576        3 minutes ago       906MB
node                                         10                  e05cbde47b8f        6 weeks ago         904MB



List the Docker image's running container (think of a class instance):
Developers-iMac:experiment-nodejs-docker-compose dev$ docker ps
CONTAINER ID        IMAGE                                        COMMAND                  CREATED             STATUS              PORTS                  NAMES
35c7818a4704        experiment-nodejs-docker-compose_spiderman   "docker-entrypoint.s…"   4 minutes ago       Up 4 minutes        0.0.0.0:80->8080/tcp   experiment-nodejs-docker-compose_spiderman_1


Visiting 0.0.0.0 (a.k.a. 127.0.0.1, a.k.a. localhost) in the browser shows the greeting message.
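
Or check it from the command line (assuming the container is up and nothing else is bound to port 80):

$ curl http://localhost/
Hello world: 你好世界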



Saturday, July 6, 2019

https with node.js local environment

Do it step-by-step:
https://www.freecodecamp.org/news/how-to-get-https-working-on-your-local-development-environment-in-5-minutes-7af615770eec/

Or just use someone else's scripts (made by same author above (Daksh Shah)):
https://github.com/dakshshah96/local-cert-generator

There's a caveat by the author that his steps should not be used for production (I've yet to learn why):

"In an Express app written in Node.js, here’s how you would do it. Make sure you do this only for your local environment. Do not use this in production."


The following steps are basically a copy of Daksh's steps for creating an SSL certificate. This is just to make sure that, in case those links become unavailable, I will still be able to generate an SSL certificate for my local environment.


Let's follow along and see what happens at each step.


This generates rootCA.key file:
Developers-iMac:experiment-nodejs-https dev$ openssl genrsa -des3 -out rootCA.key 2048
Generating RSA private key, 2048 bit long modulus
............................................................................................................+++
...................................+++
e is 65537 (0x10001)
Enter pass phrase for rootCA.key:
Verifying - Enter pass phrase for rootCA.key:
Developers-iMac:experiment-nodejs-https dev$ ls -la
total 768
drwxrwxrwx  1 dev  staff  131072 Jul  6 18:25 .
drwxrwxrwx  1 dev  staff  131072 May 30 08:47 ..
-rwxrwxrwx  1 dev  staff    1751 Jul  6 18:30 rootCA.key


This generates rootCA.pem file:
Developers-iMac:experiment-nodejs-https dev$ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
Enter pass phrase for rootCA.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:PH
State or Province Name (full name) []:
Locality Name (eg, city) []:
Organization Name (eg, company) []:
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:
Email Address []:
Developers-iMac:experiment-nodejs-https dev$ ls -la
total 1024
drwxrwxrwx  1 dev  staff  131072 Jul  6 18:25 .
drwxrwxrwx  1 dev  staff  131072 May 30 08:47 ..
-rwxrwxrwx  1 dev  staff    1751 Jul  6 18:30 rootCA.key
-rwxrwxrwx  1 dev  staff     956 Jul  6 18:31 rootCA.pem

Create server.csr.cnf file with this content:
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C=PH
ST=RandomState
L=RandomCity
O=RandomOrganization
OU=RandomOrganizationUnit
emailAddress=hello@example.com
CN = localhost


Create v3.ext file with this content:
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = localhost


Files:
Developers-iMac:experiment-nodejs-https dev$ ls -la
total 1536
drwxrwxrwx  1 dev  staff  131072 Jul  6 18:25 .
drwxrwxrwx  1 dev  staff  131072 May 30 08:47 ..
-rwxrwxrwx  1 dev  staff    1751 Jul  6 18:30 rootCA.key
-rwxrwxrwx  1 dev  staff     956 Jul  6 18:31 rootCA.pem
-rwxrwxrwx  1 dev  staff     214 Jul  6 18:33 server.csr.cnf
-rwxrwxrwx  1 dev  staff     200 Jul  6 18:34 v3.ext

This generates server.csr and server.key files:
Developers-iMac:experiment-nodejs-https dev$ openssl req -new -sha256 -nodes -out server.csr -newkey rsa:2048 -keyout server.key -config server.csr.cnf
Generating a 2048 bit RSA private key
.........................................+++
...............+++
writing new private key to 'server.key'
-----
Developers-iMac:experiment-nodejs-https dev$ ls -la
total 2048
drwxrwxrwx  1 dev  staff  131072 Jul  6 18:25 .
drwxrwxrwx  1 dev  staff  131072 May 30 08:47 ..
-rwxrwxrwx  1 dev  staff    1751 Jul  6 18:30 rootCA.key
-rwxrwxrwx  1 dev  staff     956 Jul  6 18:31 rootCA.pem
-rwxrwxrwx  1 dev  staff    1098 Jul  6 18:35 server.csr
-rwxrwxrwx  1 dev  staff     214 Jul  6 18:33 server.csr.cnf
-rwxrwxrwx  1 dev  staff    1704 Jul  6 18:35 server.key
-rwxrwxrwx  1 dev  staff     200 Jul  6 18:34 v3.ext

This generates rootCA.srl and server.crt files:
Developers-iMac:experiment-nodejs-https dev$ openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out server.crt -days 500 -sha256 -extfile v3.ext
Signature ok
subject=/C=PH/ST=RandomState/L=RandomCity/O=RandomOrganization/OU=RandomOrganizationUnit/emailAddress=hello@example.com/CN=localhost
Getting CA Private Key
Enter pass phrase for rootCA.key:
Developers-iMac:experiment-nodejs-https dev$ ls -la
total 2560
drwxrwxrwx  1 dev  staff  131072 Jul  6 18:25 .
drwxrwxrwx  1 dev  staff  131072 May 30 08:47 ..
-rwxrwxrwx  1 dev  staff    1751 Jul  6 18:30 rootCA.key
-rwxrwxrwx  1 dev  staff     956 Jul  6 18:31 rootCA.pem
-rwxrwxrwx  1 dev  staff      17 Jul  6 18:36 rootCA.srl
-rwxrwxrwx  1 dev  staff    1306 Jul  6 18:36 server.crt
-rwxrwxrwx  1 dev  staff    1098 Jul  6 18:35 server.csr
-rwxrwxrwx  1 dev  staff     214 Jul  6 18:33 server.csr.cnf
-rwxrwxrwx  1 dev  staff    1704 Jul  6 18:35 server.key
-rwxrwxrwx  1 dev  staff     200 Jul  6 18:34 v3.ext


Open Keychain Access:


Then import the rootCA.pem file using File > Import Items; the certificate will be added to Keychain Access:



Create app.js:
var fs = require('fs');
var express = require('express');
var https = require('https');

var certOptions = {
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
};

var app = express();

app.get('/message', (req, res) => {
    res.json({message: 'Great'});
})

var server = https.createServer(certOptions, app).listen(443);


Run app.js:
Developers-iMac:experiment-nodejs-https dev$ nodemon app.js
[nodemon] 1.19.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node app.js`

Open https://localhost/message in your browser:



Safari:



To prevent that error from happening, double-click the certificate, then change the Trust settings to Always Trust:



Exit the Keychain Access dialog; it will then prompt you for your superuser password:




Reload the page:


Safari:



For some reason, Firefox is uncooperative :)

Thursday, July 4, 2019

Session-less roles authorization with Passport using Authorization header + JWT

const express = require('express');

const bodyParser = require('body-parser');
const jwt = require('jsonwebtoken');
const randtoken = require('rand-token');

const passport = require('passport');
const { Strategy: JwtStrategy, ExtractJwt } = require('passport-jwt');

const refreshTokens = {};
const SECRET = "sauce";

const options = {   
    jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
    secretOrKey: SECRET
};

passport.use(new JwtStrategy(options, (jwtPayload, done) => 
{
    const expirationDate = new Date(jwtPayload.exp * 1000);
    if (new Date() >= expirationDate) {
        return done(null, false);
    }

    const user = jwtPayload;
    done(null, user);
}));


const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: true}));
app.use(passport.initialize());

app.post('/login', (req, res, next) => 
{
    const { username, password } = req.body;
    
    const accessToken = generateAccessToken(username, getRole(username));    
    const refreshToken = randtoken.uid(42);

    refreshTokens[refreshToken] = username;

    res.json({accessToken, refreshToken});
});

function generateAccessToken(username, role, expiresInSeconds = 60)
{
    const user = {
        username,
        role
    };

    const accessToken = jwt.sign(user, SECRET, { expiresIn: expiresInSeconds });
    
    return accessToken;
}


function getRole(username)
{
    switch (username) {
        case 'linus':
            return 'admin';
        default:
            return 'user';        
    }
}


app.post('/token', (req, res) => 
{
    const { username, refreshToken } = req.body;
    
    if (refreshToken in refreshTokens && refreshTokens[refreshToken] === username) {        
        const accessToken = generateAccessToken(username, getRole(username));
        res.json({accessToken});
    }
    else {
        res.sendStatus(401);
    }
});

app.delete('/token/:refreshToken', (req, res, next) => 
{
    const { refreshToken } = req.params;
    if (refreshToken in refreshTokens) {
        delete refreshTokens[refreshToken];
    }

    res.send(204);
});

app.post('/restaurant-reservation', passport.authenticate('jwt', {session: false}), (req, res) => 
{
    const { user } = req;
    const { guestsCount } = req.body;

    res.json({user, guestsCount});
});

app.get('/user-accessible', authorize(), (req, res) => 
{
    res.json({message: 'for all users', user: req.user});
});

app.get('/admin-accessible', authorize('admin'), (req,res) => 
{
    res.json({message: 'for admins only', user: req.user});
});


function authorize(roles = []) 
{
    if (typeof roles === 'string') {
        roles = [roles];
    }

    return [
        passport.authenticate('jwt', {session: false}),

        (req, res, next) => 
        {
            if (roles.length > 0 && !roles.includes(req.user.role)) {
                return res.status(403).json({message: 'No access'});
            };

            return next();
        }
    ];
}

app.listen(8080);
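
To try it out, install the packages required at the top of the file and run it (assuming the code is saved as server.js):

$ yarn add express body-parser jsonwebtoken rand-token passport passport-jwt
$ node server.js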


Test login for non-admin:
$ curl -i -H "Content-Type: application/json" --request POST --data '{"username": "richard", "password": "stallman"}' http://localhost:8080/login


Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 253
ETag: W/"fd-eFe0WyxYJMwm+IYU2RknNmwLN7c"
Date: Thu, 04 Jul 2019 11:18:34 GMT
Connection: keep-alive

{"accessToken":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InJpY2hhcmQiLCJyb2xlIjoidXNlciIsImlhdCI6MTU2MjIzOTExNCwiZXhwIjoxNTYyMjM5MTc0fQ.eVMIVLoq66wwnvX7R7_YE_Va5uUIupcWqZFJIql2VOo","refreshToken":"ZExtyNqF19XwCF8htNABs9rzwDV5lltlEQxGAGVaeV"}

Test non-admin role against user-accessible resource:
$ curl -i -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InJpY2hhcmQiLCJyb2xlIjoidXNlciIsImlhdCI6MTU2MjIzOTExNCwiZXhwIjoxNTYyMjM5MTc0fQ.eVMIVLoq66wwnvX7R7_YE_Va5uUIupcWqZFJIql2VOo" --request GET  http://localhost:8080/user-accessible


Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 103
ETag: W/"67-cGYUdSCA9Hfxbr3kmo6EfcwJGFk"
Date: Thu, 04 Jul 2019 11:19:09 GMT
Connection: keep-alive

{"message":"for all users","user":{"username":"richard","role":"user","iat":1562239114,"exp":1562239174}}


Test non-admin role against admin-accessible resource:
$ curl -i -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InJpY2hhcmQiLCJyb2xlIjoidXNlciIsImlhdCI6MTU2MjIzOTExNCwiZXhwIjoxNTYyMjM5MTc0fQ.eVMIVLoq66wwnvX7R7_YE_Va5uUIupcWqZFJIql2VOo" --request GET  http://localhost:8080/admin-accessible


Output:
HTTP/1.1 403 Forbidden
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 23
ETag: W/"17-wr3eIgD4+9Bp6mVHZCD0DXWfISk"
Date: Thu, 04 Jul 2019 11:19:16 GMT
Connection: keep-alive

{"message":"No access"}



Test login for admin:
$ curl -i -H "Content-Type: application/json" --request POST --data '{"username": "linus", "password": "torvalds"}' http://localhost:8080/login

Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 251
ETag: W/"fb-ffoTYONCm1BVpI+gaFqCXe7AX2g"
Date: Thu, 04 Jul 2019 11:23:05 GMT
Connection: keep-alive

{"accessToken":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjM5Mzg1LCJleHAiOjE1NjIyMzk0NDV9.hU09-ESu5VADYgC12R-CBtLga4lmGnpGC1AAwxg7t_Y","refreshToken":"8RItXk4v6kV8W68paZk3av34vj3oV5z1vnQdSLAZG7"}


Test admin role against admin-accessible resource:
$ curl -i -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjM5Mzg1LCJleHAiOjE1NjIyMzk0NDV9.hU09-ESu5VADYgC12R-CBtLga4lmGnpGC1AAwxg7t_Y" --request GET  http://localhost:8080/admin-accessible


Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 104
ETag: W/"68-MXNiAugqialVCopW9uv3AMKWHjU"
Date: Thu, 04 Jul 2019 11:23:42 GMT
Connection: keep-alive

{"message":"for admins only","user":{"username":"linus","role":"admin","iat":1562239385,"exp":1562239445}}


And of course the user-accessible resource is available to the admin role too:
$ curl -i -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjM5Mzg1LCJleHAiOjE1NjIyMzk0NDV9.hU09-ESu5VADYgC12R-CBtLga4lmGnpGC1AAwxg7t_Y" --request GET  http://localhost:8080/user-accessible


Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 102
ETag: W/"66-fLBX3f6AfyVZNVcfBNUakwJko5w"
Date: Thu, 04 Jul 2019 11:23:51 GMT
Connection: keep-alive

{"message":"for all users","user":{"username":"linus","role":"admin","iat":1562239385,"exp":1562239445}}

Session-less authentication with Passport using Authorization header + JWT

const express = require('express');

const bodyParser = require('body-parser');
const jwt = require('jsonwebtoken');
const randtoken = require('rand-token');

const passport = require('passport');
const { Strategy: JwtStrategy, ExtractJwt } = require('passport-jwt');

const refreshTokens = {};
const SECRET = "sauce";

const options = {   
    jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
    secretOrKey: SECRET
};

passport.use(new JwtStrategy(options, (jwtPayload, done) => 
{
    const expirationDate = new Date(jwtPayload.exp * 1000);
    if (new Date() >= expirationDate) {
        return done(null, false);
    }

    const user = jwtPayload;
    done(null, user);
}));


const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: true}));
app.use(passport.initialize());

app.post('/login', (req, res, next) => 
{
    const { username, password } = req.body;
    
    const accessToken = generateAccessToken(username, getRole(username));    
    const refreshToken = randtoken.uid(42);

    refreshTokens[refreshToken] = username;

    res.json({accessToken, refreshToken});
});

function generateAccessToken(username, role, expiresInSeconds = 60)
{
    const user = {
        username,
        role
    };

    const accessToken = jwt.sign(user, SECRET, { expiresIn: expiresInSeconds });
    
    return accessToken;
}


function getRole(username)
{
    switch (username) {
        case 'linus':
            return 'admin';
        default:
            return 'user';        
    }
}


app.post('/token', (req, res) => 
{
    const { username, refreshToken } = req.body;
    
    if (refreshToken in refreshTokens && refreshTokens[refreshToken] === username) {        
        const accessToken = generateAccessToken(username, getRole(username));
        res.json({accessToken});
    }
    else {
        res.sendStatus(401);
    }
});

app.delete('/token/:refreshToken', (req, res, next) => 
{
    const { refreshToken } = req.params;
    if (refreshToken in refreshTokens) {
        delete refreshTokens[refreshToken];
    }

    res.send(204);
});

app.post('/restaurant-reservation', passport.authenticate('jwt', {session: false}), (req, res) => 
{
    const { user } = req;
    const { guestsCount } = req.body;

    res.json({user, guestsCount});
});


app.listen(8080);

Do login:
$ curl -i -H "Content-Type: application/json" --request POST --data '{"username": "linus", "password": "torvalds"}' http://localhost:8080/login

Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 465
ETag: W/"1d1-fL53kZkGhDz1fE7e8SWj1fsPmqk"
Date: Thu, 04 Jul 2019 05:34:50 GMT
Connection: keep-alive

{"accessToken":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjE4NDkwLCJleHAiOjE1NjIyMTg1NTB9.BJzSGzbVgy_WZJAABoQ_xCPgf6OZgiezn7KxAiVrkm4","refreshToken":"KPImAwtuR4KMlU6RA7cdoouBJ3JY2XxiRzd4hTu3gU"}



Test authentication middleware:
$ curl -i -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjE4NDkwLCJleHAiOjE1NjIyMTg1NTB9.BJzSGzbVgy_WZJAABoQ_xCPgf6OZgiezn7KxAiVrkm4" \
-H "Content-Type: application/json" \
--request POST --data '{"guestsCount": 7}' \
http://localhost:8080/restaurant-reservation

Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 94
ETag: W/"5e-IgmolWlx16EymTgKApJWpv27g74"
Date: Thu, 04 Jul 2019 05:35:38 GMT
Connection: keep-alive

{"user":{"username":"linus","role":"admin","iat":1562218490,"exp":1562218550},"guestsCount":7}


Run the command above again after 60 seconds. Output:
HTTP/1.1 401 Unauthorized
X-Powered-By: Express
Date: Thu, 04 Jul 2019 05:36:24 GMT
Connection: keep-alive
Content-Length: 12

Unauthorized


As the existing access token has expired, we need to get a new access token using the refresh token:
$ curl -i -H "Content-Type: application/json" \
--request POST \
--data '{"username": "linus", "refreshToken": "KPImAwtuR4KMlU6RA7cdoouBJ3JY2XxiRzd4hTu3gU"}' \
http://localhost:8080/token


Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 191
ETag: W/"bf-is3S3uVXUUs7vW0DjF+cQ+bzj70"
Date: Thu, 04 Jul 2019 05:37:56 GMT
Connection: keep-alive

{"accessToken":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjE4Njc2LCJleHAiOjE1NjIyMTg3MzZ9.bsy2oAm8qU8R6xM5b4SpxqJm8l8Ca4ssRu6VMMtZZq4"}


Test authentication middleware using the new access token:
$ curl -i -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImxpbnVzIiwicm9sZSI6ImFkbWluIiwiaWF0IjoxNTYyMjE4NzY0LCJleHAiOjE1NjIyMTg4MjR9.Fw_HFTJmTc-dOjzExoJF_Wk2QGwYINaG_d6cuoc6fjQ" \
-H "Content-Type: application/json" \
--request POST --data '{"guestsCount": 7}' \
http://localhost:8080/restaurant-reservation

Output:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 94
ETag: W/"5e-E8689sgC1zfAYBFoQnj/jmIPd+4"
Date: Thu, 04 Jul 2019 05:39:54 GMT
Connection: keep-alive

{"user":{"username":"linus","role":"admin","iat":1562218764,"exp":1562218824},"guestsCount":7}


If we need the user to re-login, just revoke (delete) the refresh token:
$ curl -i -X DELETE http://localhost:8080/token/KPImAwtuR4KMlU6RA7cdoouBJ3JY2XxiRzd4hTu3gU

Output:
HTTP/1.1 204 No Content
X-Powered-By: Express
ETag: W/"a-bAsFyilMr4Ra1hIU5PyoyFRunpI"
Date: Thu, 04 Jul 2019 05:43:43 GMT
Connection: keep-alive


Test getting a new access token using the revoked refresh token:
$ curl -i -H "Content-Type: application/json" \
--request POST \
--data '{"username": "linus", "refreshToken": "KPImAwtuR4KMlU6RA7cdoouBJ3JY2XxiRzd4hTu3gU"}' \
http://localhost:8080/token

Output:
HTTP/1.1 401 Unauthorized
X-Powered-By: Express
Content-Type: text/plain; charset=utf-8
Content-Length: 12
ETag: W/"c-dAuDFQrdjS3hezqxDTNgW7AOlYk"
Date: Thu, 04 Jul 2019 05:44:24 GMT
Connection: keep-alive

Related code, roles authorization: https://www.anicehumble.com/2019/07/session-less-authorization-with-passport-authorization-header-jwt.html

Thursday, May 9, 2019

In For The Win!

Someone on Stack Overflow observed that IN is faster than EXISTS on MySQL:

select * from `replays` 
where id in (
    select replay_id from `players` 
    where `battletag_name` = 'test') 
order by `id` asc 
limit 100;

select * from `replays` 
where exists (
    select * from `players` 
    where `replays`.`id` = `players`.`replay_id` 
      and `battletag_name` = 'test') 
order by `id` asc 
limit 100;

EXISTS took 70 seconds. IN took 0.4 seconds.


Aside from IN being faster than EXISTS in most cases, the readability of IN is a big win. Also, with IN, queries can be made modular, e.g.:


with test_players as 
(
    select replay_id 
    from players 
    where battletag_name = 'test'
) 
select * 
from replays 
where id in (select replay_id from test_players)

The above cannot be done when using EXISTS.



Monday, April 29, 2019

Query Error: Error: ER_NOT_SUPPORTED_YET: This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'

This does not work on MySQL, but works on Postgres:

select
    t.player, 
    sum(case when t.eventid = 1 then t.points end) as event1,
    sum(case when t.eventid = 2 then t.points end) as event2,
    sum(case when t.eventid = 3 then t.points end) as event3,
    sum(case when t.eventid = 4 then t.points end) as event4,
    
    sum(
        case when t.points >= any(
            select best3.points 
            from tbl best3 
            where best3.player = t.player
            order by best3.points desc 
            limit 3
        ) then 
            t.points
        end
    )             
from tbl t
group by t.player

Output on MySQL 8.0:
Query Error: Error: ER_NOT_SUPPORTED_YET: This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'


Output on Postgres:
| player | event1 | event2 | event3 | event4 | sum |
| ------ | ------ | ------ | ------ | ------ | --- |
| 1      | 25     | 15     | 20     | 20     | 65  |
| 2      | 20     | 13     | 12     | 10     | 45  |


You can even make the code neater on Postgres by using FILTER:

Live test: https://www.db-fiddle.com/f/haMmw4S4f7XMqcBD8CDV7H/1


select
    t.player, 
    sum(t.points) filter(where t.eventid = 1) as event1,
    sum(t.points) filter(where t.eventid = 2) as event2,
    sum(t.points) filter(where t.eventid = 3) as event3,
    sum(t.points) filter(where t.eventid = 4) as event4,
    
    sum(t.points) filter(where
        t.points >= any(
            select best3.points 
            from tbl best3 
            where best3.player = t.player
            order by best3.points desc 
            limit 3
        )
    ) as best3          
from tbl t
group by t.player          

Solution for MySQL 8.0, which works on Postgres too:

Live test: https://www.db-fiddle.com/f/4ufuFAXKf7mi5yefNQqoXM/2

with ranking as
(
    select 
        player,       
        rank() over(partition by player order by points desc) as xrank,
        points
    from tbl
)
,pick3 as
(
    select 
        player, 
        sum(points) as best3
    from ranking 
    where xrank <= 3
    group by player
)
select
    t.player, 
    sum(if(t.eventid = 1, t.points,0)) as event1,
    sum(if(t.eventid = 2, t.points,0)) as event2,
    sum(if(t.eventid = 3, t.points,0)) as event3,  
    sum(if(t.eventid = 4, t.points,0)) as event4,
    p.best3            
from tbl t
join pick3 p on t.player = p.player
group by t.player

MySQL 5.7 solution:

Live test: https://www.db-fiddle.com/f/4ufuFAXKf7mi5yefNQqoXM/15

select
    t.player, 
    sum(case when t.eventid = 1 then t.points end) as event1,
    sum(case when t.eventid = 2 then t.points end) as event2,
    sum(case when t.eventid = 3 then t.points end) as event3,
    sum(case when t.eventid = 4 then t.points end) as event4,
    
    sum(
        case when t.points >= (
            select best3.points 
            from tbl best3 
            where best3.player = t.player
            order by best3.points desc 
            limit 1 offset 2
        ) then 
            t.points
        end
    ) as best3            
from tbl t
group by t.player;

Output:
| player | event1 | event2 | event3 | event4 | best3 |
| ------ | ------ | ------ | ------ | ------ | ----- |
| 1      | 25     | 15     | 20     | 20     | 65    |
| 2      | 20     | 13     | 12     | 10     | 45    |

Friday, April 19, 2019

salary > avg(salary) conundrum

Beginners, and I think even some non-beginners, would want this to work:

select *
from employees 
where salary > AVG(salary)

The problem is that AVG would be applied to the filtered rows; it would be circular for AVG(salary) to get its value if AVG were part of the WHERE clause. To prevent that confusion, the SQL standard disallows aggregate functions in the WHERE clause.
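
The standard workaround is to compute the average in a subquery, so the aggregate is evaluated over the whole table rather than inside the WHERE clause:

select *
from employees
where salary > (select avg(salary) from employees);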

Humans are lazy. Programmers are humans. Programmers are lazy.

I think it is a missed opportunity that SQL didn't impose aliases when tables are referenced; otherwise it could have introduced functionality with no ambiguity:

select e.*
from employees e
where e.salary > AVG(employees.salary)

Or perhaps a little OOP:
select e.*
from employees e
where e.salary > employees.AVG(salary)


That would mean: get all the employees (denoted by e) from employees whose salary is greater than the average salary of all employees.

Before you scoff that it would be super-duper hard for RDBMS developers to parse things like that, consider that this sort of brevity can be achieved in C#'s LINQ:

Live test: https://dotnetfiddle.net/uXJEZF

using System;
using System.Linq;

class Employee 
{
    public string Name { get; set; }
    public decimal Salary { get; set; }
}

public class Simple 
{
    public static void Main () 
    {
        var employees = new Employee[]
        {
            new Employee { Name = "John", Salary = 10 },
            new Employee { Name = "Paul", Salary = 9 },
            new Employee { Name = "George", Salary = 2 },
            new Employee { Name = "Ringo", Salary = 1 },
        };

        Console.WriteLine ("Average salary: {0}", employees.Average (x => x.Salary));

        var query = 
            from e in employees
            where e.Salary > employees.Average(x => x.Salary)
            select e;

        foreach (var e in query)
        {
            Console.WriteLine ("{0} {1}", e.Name, e.Salary);
        }
    }
}

Output:
Average salary: 5.5
John 10
Paul 9


If RDBMSes had OOP and LINQ syntax, they could prevent unusual requests.



Live test: https://dotnetfiddle.net/PiANMM

Thursday, April 18, 2019

Dynamic unpivoting: SQL Server said, Postgres said

SQL Server version:

declare @tx table(
    id int identity(1, 1) not null,
    data varchar(100),
    column1 int,
    column2 int,
    column3 int
);

insert into
    @tx(data, column1, column2, column3)
values
    ('data1', 1, 2, 3),
    ('data2', 4, 5, 6),
    ('data3', 7, 8, null);

select
    a.id,
    a.data,
    c.item,
    c.value 
from
    @tx a
    cross apply (
        values
            (
                cast(
                    (
                        select
                            a.* for xml raw
                    ) as xml
                )
            )
    ) as b(xmldata)
    cross apply (
        select
            item  = xattr.value('local-name(.)', 'varchar(100)'),
            value = xattr.value('.', 'int')
        from
            b.xmldata.nodes('//@*') as xnode(xattr)
        where
            xnode.xattr.value('local-name(.)', 'varchar(100)') not in 
                ('id', 'data', 'other-columns', 'to-exclude')
    ) c



The SQL Server version can't include null values, though.

Postgres version:

Note: Run this first to be able to use the hstore functionality: create extension hstore;

create temporary table tx(
    id int generated by default as identity primary key,
    data text,
    column1 int,
    column2 int,
    column3 int
) on commit drop;

insert into
    tx(data, column1, column2, column3)
values
    ('data1', 1, 2, 3),
    ('data2', 4, 5, 6),
    ('data3', 7, 8, null);
    
with a as (
    select
        id,
        data,
        each(hstore(tx.*) - 'id'::text - 'data'::text) as h
    from
        tx
)
select
    id,
    data,
    (h).key as item,
    (h).value::int as value
from
    a 
-- this would work too:
-- where (h).key not in ('id', 'data', 'other-columns', 'to-exclude')



No problem with Postgres; it includes nulls.


If the number of columns to unpivot is not so dynamic, you can do this in Postgres:

select tx.id, tx.data, x.*
from tx
join lateral (
 values
     ('column1', column1), 
     ('column2', column2), 
     ('column3', column3)
) as x(item, value) on true

The equivalent in SQL Server:
select tx.id, tx.data, x.*
from @tx tx
cross apply (  
    values
     ('column1', column1), 
     ('column2', column2), 
     ('column3', column3)    
) as x(item, value);

Both Postgres and SQL Server include nulls in the result.

https://stackoverflow.com/questions/55731155/combined-semi-transpose-of-a-data/55731461

Saturday, April 13, 2019

pgAdmin Internal Server Error

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

If you encounter this error on a *nix-based system, just delete the .pgadmin directory from your home directory, i.e.,

$ rm -rf ~/.pgadmin


Note that you will need to re-enter your postgres user password when pgAdmin is launched.

Friday, April 12, 2019

SQL Intersect

create table test (
  id integer,
  pid integer,
  name varchar(2),
  val integer
);

insert into test
  (id, pid, name, val)
values
  ('1', '1', 'aa', '10'),
  ('2', '1', 'bb', '20'),
  ('3', '1', 'cc', '30'),
  ('4', '2', 'aa', '10'),
  ('5', '2', 'bb', '20'),
  ('6', '2', 'cc', '30'),
  ('7', '3', 'aa', '10'),
  ('8', '3', 'bb', '20'),
  ('9', '3', 'cc', '999');

select distinct pid from test 
where
pid in (select pid from test where (name,val) = ('aa',10))     
and pid in (select pid from test where (name,val) = ('bb',20))
and pid in (select pid from test where (name,val) = ('cc',30));



-- works on all RDBMS
select pid from test where (name,val) = ('aa',10)
and pid in (
    select pid from test where (name,val) = ('bb',20)
    and pid in (
        select pid from test where (name,val) = ('cc',30)
    )
);


-- works on most RDBMS, MySQL has no INTERSECT
select pid from test where (name,val) = ('aa',10)
intersect
select pid from test where (name,val) = ('bb',20)
intersect
select pid from test where (name,val) = ('cc',30);


-- works on all RDBMS
select a.pid
from   test a, test b, test c
where  (a.name,a.val) = ('aa',10) 
and    (b.name,b.val) = ('bb',20) 
and    (c.name,c.val) = ('cc',30) 
and    (a.pid = b.pid and b.pid = c.pid);

-- same as above. for JOIN purists
select a.pid
from   test a 
cross join test b
cross join test c
where  (a.name,a.val) = ('aa',10) 
and    (b.name,b.val) = ('bb',20) 
and    (c.name,c.val) = ('cc',30) 
and    (a.pid = b.pid and b.pid = c.pid);


-- just count
select t.pid
from test t
where (t.name, t.val) in ( ('aa', '10'), ('bb', '20'), ('cc', '30') )
group by t.pid
having count(*) = 3;                                                     


-- just sum
select t.pid
from test t
group by t.pid
having sum( ((t.name, t.val) in ( ('aa', '10'), ('bb', '20'), ('cc', '30') ))::int ) = 3                                                   


Output:
distinct and IN derived table
| pid |
| --- |
| 2   |
| 1   |


nested IN derived table
| pid |
| --- |
| 1   |
| 2   |


intersect
| pid |
| --- |
| 1   |
| 2   |


cross join using old syntax, table comma table
| pid |
| --- |
| 1   |
| 2   |

cross join
| pid |
| --- |
| 1   |
| 2   |

count
| pid |
| --- |
| 1   |
| 2   |


sum
| pid |
| --- |
| 1   |
| 2   |

All have the same result; the first query's result is not ordered, though.

Live test: https://dbfiddle.uk/?rdbms=postgres_11&fiddle=3311663df90fd62f1194e50699767b0e

Execution plan: https://dbfiddle.uk/?rdbms=postgres_11&fiddle=925dc71c30c9ef0f27bbc58f28b3bc33

Thursday, April 11, 2019

Not every RDBMS has every

Live test: https://www.db-fiddle.com/f/j1y7WCH2tRcWoE6QMbfRxC/6

Postgres supports the SQL Standard's every.

The query below shows the movie(s) whose genres are exactly Fantasy + Mystery. It won't show a movie if it is Fantasy + Mystery + Drama, for example.

Here are the various ways every can be emulated in other RDBMSes.

select m.id, m.title, m.year
from movie m
join movie_genre mg on m.id = mg.movie_id
group by m.id
having 
    count(mg.genre) = 2 -- making sure that the movie has exactly two genres

    -- SQL Standard. Available in Postgres
    and every(mg.genre in ('Fantasy','Mystery'))
    -- Specific to Postgres
    and bool_and(mg.genre in ('Fantasy','Mystery'))

    
    -- MySQL can simulate every/bool_and by using BIT_AND. 
    -- Since MySQL's boolean behaves the same as C's int, there's no need to explicitly cast the boolean expression to int.
    -- Hence MySQL doesn't need to compare bit_and's result to 1 either.
    -- Following is how it is done in MySQL:
    
    -- and bit_and(mg.genre in ('Fantasy','Mystery'))

    
    
    --  Postgres has bit_and, but it's solely for bit manipulation. 
    --  No Postgres user would use bit_and for every/bool_and functionality; Postgres already has the very English-like every/bool_and.
    --  If a user is intent on using bit_and instead of every/bool_and, the user has to cast the boolean expression to integer using ::int.
    --  And also, bit_and's result needs to be compared to 1:

    and bit_and((mg.genre in ('Fantasy','Mystery'))::int) = 1


    -- Other RDBMSes
    and count(case when mg.genre in ('Fantasy','Mystery') then mg.genre end) = count(mg.genre)
    and min(case when mg.genre in ('Fantasy','Mystery') then 1 else 0 end) = 1
    and sum(case when mg.genre in ('Fantasy','Mystery') then 0 else 1 end) = 0

Output:
| id  | title                                    | year |
| --- | ---------------------------------------- | ---- |
| 125 | Harry Potter and the Prisoner of Azkaban | 2004 |


DDL:
CREATE TABLE movie (
  id INTEGER primary key,
  title VARCHAR(42),
  year INTEGER
);

INSERT INTO movie
  (id, title, year)
VALUES
  ('308', 'Coraline', '2009'),
  ('125', 'Harry Potter and the Prisoner of Azkaban', '2004'),
  ('204', 'Hugo', '2011'),
  ('42', 'Hitchiker''s guide to galaxy', '2011'),
  ('168', 'Lucky', '2011'),
  ('88', 'Nice', 1969);
  
  

CREATE TABLE movie_genre (
  movie_id INTEGER,
  genre VARCHAR(8)
);

INSERT INTO movie_genre
  (movie_id, genre)
VALUES
  ('308', 'Fantasy'),
  ('308', 'Thriller'),
  ('125', 'Fantasy'),
  ('125', 'Mystery'),
  ('204', 'Fantasy'),
  ('204', 'Mystery'),
  ('204', 'Drama'),
  ('308', 'Fantasy'),
  ('168', 'Fantasy'),
  ('88', 'Mystery'),
  ('88', 'Tour');  


Postgres-specific

Live test: https://www.db-fiddle.com/f/j1y7WCH2tRcWoE6QMbfRxC/2
select m.id, m.title, m.year
from movie m
join movie_genre mg on m.id = mg.movie_id
group by m.id
having 
    count(mg.genre) = 2 -- making sure that the movie has exactly two genres
    and every(mg.genre in ('Fantasy','Mystery'))    

Output:
| id  | title                                    | year |
| --- | ---------------------------------------- | ---- |
| 125 | Harry Potter and the Prisoner of Azkaban | 2004 |


To avoid explicit count (Postgres-specific)

Live test: https://www.db-fiddle.com/f/j1y7WCH2tRcWoE6QMbfRxC/7

select m.id, m.title, m.year
from movie m
join movie_genre mg on m.id = mg.movie_id
group by m.id
having array_agg(mg.genre order by mg.genre) = array['Fantasy', 'Mystery']

Output:
| id  | title                                    | year |
| --- | ---------------------------------------- | ---- |
| 125 | Harry Potter and the Prisoner of Azkaban | 2004 |

Note that we need to use order by mg.genre as array comparison is order-dependent. Without order by mg.genre, Harry Potter will not be shown.

Another good read on every: https://blog.jooq.org/2014/12/18/a-true-sql-gem-you-didnt-know-yet-the-every-aggregate-function/

Wednesday, March 27, 2019

For interface-free action props, use typeof

Instead of creating an interface for the actions that will be used by the component, just use typeof on the actionCreators object, as sketched below.
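
A minimal sketch of the idea (the file and prop names here are assumed for illustration, not taken from the screenshot):

import { setColorTheme } from './logged-user-actions';

// gather the action creators in one object...
const actionCreators = {
    setColorTheme
};

// ...and derive the action props type from it instead of hand-writing an interface
type ActionProps = typeof actionCreators;

// other (non-action) props can then be intersected in as usual
type IProps = ActionProps & { colorTheme: string };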



A sample action creator (e.g., setColorTheme):

export const setColorTheme = (colorTheme: string): LoggedUserAction => ({
    type: LoggedUserActionType.LOGGED_USER__SET_COLOR_THEME,
    colorTheme
});

Tuesday, March 26, 2019

Without immer, with immer

Without immer, constructing immutables is error-prone:

increaseOrderQuantity(item) {
    const itemIndex = this.state.data.indexOf(item);

    this.setState(({data}) => ({
        data: [
            ...data.slice(0, itemIndex),
            {
                ...item,
                quantity: item.quantity + 1
            },
            ...data.slice(itemIndex + 1) // easy to make the mistake of forgetting the + 1 here
        ]
    }));
}


With immer, less error-prone:

import produce from 'immer';

.
.
.


increaseOrderQuantity(item) {
    const itemIndex = this.state.data.indexOf(item);
    
    this.setState(produce(draft => {
        draft.data[itemIndex].quantity++;
    }));
}

Saturday, March 23, 2019

Don't use siloed mapDispatchToProps, use redux-thunk

This makes the getLoggedUser functionality accessible only to the App component, not to all components:

export const incrementCounter = (n?: number): CounterAction => ({
    type: CounterActionType.COUNTER__INCREMENT,
    n
});

const mapDispatchToProps = (dispatch: Dispatch) => ({
    incrementCounter: (n?: number) => dispatch(incrementCounter(n)),
    getLoggedUser: async () => {
        const userRequest = await fetch('https://jsonplaceholder.typicode.com/users/1');

        const {status} = userRequest;

        if (!(status === 200 || status === 301)) {
            throw new Error('HTTP Status: ' + status);
        }

        const {username} = await userRequest.json() as IUserDto;

        await dispatch(setLoggedUser(username));

        await dispatch(setColorTheme('blue'));

    }
});

export default connect(mapStateToProps, mapDispatchToProps)(hot(App));


To make the getLoggedUser functionality accessible from other components, instead of defining it inside mapDispatchToProps, define getLoggedUser outside and make it return a thunk that accepts a dispatch parameter (a function returned from a function is called a thunk). Then include the thunk creator in the action creators object.


export const incrementCounter = (n?: number): CounterAction => ({
    type: CounterActionType.COUNTER__INCREMENT,
    n
});

export const getLoggedUser = () => async (dispatch: Dispatch): Promise<void> =>
{
    const userRequest = await fetch('https://jsonplaceholder.typicode.com/users/1');

    const {status} = userRequest;

    if (!(status === 200 || status === 301)) {
        throw new Error('HTTP Status: ' + status);
    }

    const {username} = await userRequest.json() as IUserDto;

    await dispatch(setLoggedUser(username));

    await dispatch(setColorTheme('blue'));

};


const actionCreators = {
    incrementCounter, 
    getLoggedUser
};

export default connect(mapStateToProps, actionCreators)(hot(App));


You'll receive the error below if you forgot to import and configure redux-thunk in your project:

Uncaught Error: Actions must be plain objects. Use custom middleware for async actions.


Here's an example configuration for redux-thunk:

import { applyMiddleware, compose, createStore, Store } from 'redux';

import { reducersRoot } from './reducers-root';

import { IAllState } from './all-state';

import ReduxThunk from 'redux-thunk';

export function configureStore(): Store<IAllState>
{
    const middlewares = applyMiddleware(ReduxThunk);

    const composeEnhancers = (window as any)['__REDUX_DEVTOOLS_EXTENSION_COMPOSE__'] || compose;

    const composed = composeEnhancers(middlewares);

    return createStore(reducersRoot(), composed);
}
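
And make sure the redux-thunk package itself is installed, e.g. with Yarn:

$ yarn add redux-thunk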

Another benefit of not using mapDispatchToProps is that you can just pass the action creators object directly to connect; there's no need for your code to call dispatch itself. This keeps the code simple.

const actionCreators = {
    incrementCounter, 
    getLoggedUser
};

If need be, you can customize how an action creator is called:

const actionCreators = {
    incrementCounter: (n?: number) => incrementCounter(n! * 42),
    getLoggedUser
};

Friday, March 22, 2019

Error: Actions must be plain objects. Use custom middleware for async actions

If you got that error, it's likely that you forgot to import redux-thunk and configure it, and your store setup looks similar to the code below:

import { createStore, Store } from 'redux';

import { reducersRoot } from './reducers-root';

import { IAllState } from './all-state';

export function configureStore(): Store<IAllState>
{
    const devTools: any = (window as any)['__REDUX_DEVTOOLS_EXTENSION__'];

    return createStore(reducersRoot(), devTools && devTools());
}


Solution:
import { applyMiddleware, compose, createStore, Store } from 'redux';

import { reducersRoot } from './reducers-root';

import { IAllState } from './all-state';

import ReduxThunk from 'redux-thunk';

export function configureStore(): Store<IAllState>
{
    const middlewares = applyMiddleware(ReduxThunk);

    const composeEnhancers = (window as any)['__REDUX_DEVTOOLS_EXTENSION_COMPOSE__'] || compose;

    const composed = composeEnhancers(middlewares);

    return createStore(reducersRoot(), composed);
}


Wednesday, March 20, 2019

Flattening redux state while maintaining navigability

Let's say you have this interface for mapStateToProps:

// file 1

export interface ILoggedUserState
{
    username: string;
    email: string;
    lastLogged: Date;
}

// file 2

export interface IAllState
{
    loggedUser: ILoggedUserState;
    chosenTheme: IChosenTheme;
    menu: IMenu;
}

// file 3

interface IPropsToUse
{
    loggedUser: ILoggedUserState;
    chosenTheme: IChosenTheme;
}

export const mapStateToProps = ({loggedUser, chosenTheme}: IAllState): IPropsToUse => ({
    loggedUser,
    chosenTheme
});




And say you want to flatten it, or use only a few fields from ILoggedUserState; for example, we want to use only the username:

interface IPropsToUse
{
    username: string;
    chosenTheme: IChosenTheme;
}

export const mapStateToProps = ({loggedUser: {username}, chosenTheme}: IAllState): IPropsToUse => ({
    username,
    chosenTheme
});


The downside of that code is that it no longer conveys where IPropsToUse's username property comes from.

Here's a better way to convey that IPropsToUse's username comes from ILoggedUserState's username field.

interface IPropsToUse
{
    username: ILoggedUserState['username'];
    chosenTheme: IChosenTheme;
}

export const mapStateToProps = ({loggedUser: {username}, chosenTheme}: IAllState): IPropsToUse => ({
    username,
    chosenTheme
});


The syntax looks odd, but it works. With the code above, we can convey that IPropsToUse's username comes from ILoggedUserState's username, and IPropsToUse's username also receives the type of ILoggedUserState's username.