
Best Practices With React and Redux Web Application Development, Part 1


A web developer who had to switch his company's internally facing apps from Angular to React gives some advice on working with this framework/library.


Introduction

In the past year, our team re-wrote one of our internal apps from Angular to React. Prior React experience on the team ranged from none to extensive, and we learned a lot along the way. Much of that learning came from hitting pain points and inefficiencies in development, then either researching others' best practices or experimenting with what worked best for us.

Use TypeScript

One of the best decisions we ever made in this project was to use TypeScript, or more broadly, some form of typed JavaScript. We had to choose between TypeScript and Flow, and with nothing against Flow, we decided TypeScript would work better for our development workflow. Using TypeScript has been a boon to our development and given us a higher degree of confidence while working as a team on the codebase. Refactoring a large codebase where calls run 3-4 layers deep across many parts of the app can be nerve-wracking. With TypeScript, as long as you have typed your functions, the uncertainty is virtually gone. That isn't to say you can't write incorrect or incomplete TypeScript that still leads to errors, but as long as you adhere to proper typing, certain classes of errors, like passing the wrong set of arguments, virtually disappear.

If you are uncertain about TypeScript, or you simply want to eliminate a large category of risk in your application, use TypeScript.
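As a minimal, hypothetical illustration of the class of error that typing removes (the function and interface below are invented for this example, not from our codebase):

```typescript
interface IProduct {
  name: string;
  version: string;
}

// With typed arguments, calling this with arguments swapped, missing,
// or of the wrong shape is a compile-time error rather than a runtime
// surprise three layers deep in the app.
function formatProduct(product: IProduct, upperCase: boolean): string {
  const label = `${product.name}@${product.version}`;
  return upperCase ? label.toUpperCase() : label;
}

// formatProduct("widget", true);           // compile error: string is not IProduct
// formatProduct({ name: "widget" }, true); // compile error: "version" is missing
const label = formatProduct({ name: "widget", version: "1.2.0" }, false);
```

In plain JavaScript, both commented-out calls would happily run and fail somewhere else, at some later time.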

On this note, we also use TypeStyle (https://typestyle.github.io/#/), and we've been very pleased with it.

Avoid large-scale apps that don't adhere to strict code styling and standards and don't leverage some sort of JavaScript type checker like Flow or TypeScript. Typed alternatives that compile to JavaScript, such as Scala.js, can also help here.

Instead, be aware that the larger an untyped JavaScript project grows, the more difficult refactoring becomes, and the higher the risk each refactor carries. Type checking doesn't eliminate this risk entirely, but it greatly reduces it.

Use Error Tracking

Another invaluable decision the team made was to use Sentry. While I'm sure there are other great error-tracking products out there, Sentry was the first we used and it has served us incredibly well. Sentry gives sight to the blind, and boy were we blind in production environments early on. Initially, we relied on QA or users to report errors in the product, and users will always expose errors that QA doesn't test. This is where Sentry comes in. With proper release tagging and user tagging, you can zero in on exact releases and exact users and be proactive in identifying bugs and errors. There were numerous bugs we were able to fix before they even reached prod: we discovered them in Sentry during QA, caused by some unexpected data issue or another situation we had not accounted for.

Avoid running in production without the ability to automatically capture errors.

Instead, use Sentry or some other error reporting tool.
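As a sketch of what release and user tagging look like, here is the current @sentry/browser SDK; the DSN, release string, and user id are placeholders, and your own client may differ:

```typescript
import * as Sentry from "@sentry/browser";

// The DSN and release are placeholders; in practice the release is
// injected at build time (e.g. from a git SHA) so every captured error
// maps back to an exact deploy.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  release: "my-app@1.3.2",
});

// Tag the current user so errors can be traced to exact sessions.
Sentry.setUser({ id: "user-42" });
```

With both tags in place, the dashboard can answer "which users hit this error, and in which release did it first appear?" without any user filing a report.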

Optimize Your Build Process

Spend some time optimizing your build. What if your local dev build takes 20 seconds? What if you have 10 developers on the project, each re-compiling 5 times an hour, or 40 times a day, so each spends ~800 seconds a day waiting? Accounting for workdays and an average of 4 weeks off per year, that comes to ~50 hours per developer per year, or 500 hours for the team. That's not insignificant when low-hanging fruit could reduce build times, and with them the waiting and the context switching.

We have rebuilds of under 2-5 seconds through the webpack DLL plugin and other dev-side optimizations. We also do code splitting and hot module reloading, so only the modules that changed are reloaded. We even have a pared-down version of our build so that, when working on certain parts of the app, we initially compile only that part. There are many tricks you can use with webpack.

Airbnb wrote an excellent synopsis of how they optimized their build in a GitHub issue, which includes many of the optimizations we've made and then some.

Avoid settling for a generic webpack build without pursuing deeper optimizations.

Instead, tailor your webpack build to your specific web app. For example, if you are using TypeScript, you'd want awesome-typescript-loader; if not, you may want to use happypack.
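For instance, a trimmed webpack.config.ts sketch combining awesome-typescript-loader with a DLL reference; the loader and plugin names are real, but the paths and options here are illustrative, not our actual config:

```typescript
import * as path from "path";
import * as webpack from "webpack";

const config: webpack.Configuration = {
  entry: "./src/index.tsx",
  resolve: { extensions: [".ts", ".tsx", ".js"] },
  module: {
    rules: [
      // awesome-typescript-loader type-checks and transpiles TS in one pass.
      { test: /\.tsx?$/, loader: "awesome-typescript-loader" },
    ],
  },
  plugins: [
    // Reference a prebuilt DLL bundle of rarely-changing vendor libraries
    // (built separately with DllPlugin) so rebuilds only recompile app code.
    new webpack.DllReferencePlugin({
      context: __dirname,
      manifest: path.resolve(__dirname, "dll/vendor-manifest.json"),
    }),
  ],
};

export default config;
```

The DLL split is where most of our rebuild-time savings came from: vendor code is compiled once, not on every change.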

Use Modern JavaScript Constructs but Know Their Consequences

For example, async/await is a great way to write very clean asynchronous code, but remember that if you await a Promise.all and any one of the promises fails, the entire call fails. Build your redux actions with this in mind; otherwise, a small failure in one API can prevent major portions of your app from loading.
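A minimal sketch of that failure mode; the loader functions here are hypothetical stand-ins for real API calls:

```typescript
// One rejected promise fails the whole Promise.all, so every other
// result is lost along with it.
async function loadDashboard(
  calls: Array<() => Promise<string>>
): Promise<string[]> {
  // If ANY call rejects, this whole function rejects.
  return Promise.all(calls.map((call) => call()));
}

// One way to contain the blast radius: catch per call and substitute a
// fallback, so a single failed API does not blank the whole page.
async function loadDashboardSafe(
  calls: Array<() => Promise<string>>
): Promise<string[]> {
  return Promise.all(
    calls.map((call) => call().catch(() => "unavailable"))
  );
}
```

With redux actions built on an awaited Promise.all, the second shape keeps a partial failure from becoming a total one.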

Another very nice construct is the object spread operator, but remember it will break object equality and thus circumvent the natural usage of PureComponent.
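A quick, self-contained illustration of the equality break (the object here is invented for the example):

```typescript
// The object spread operator always allocates a new object, so the
// shallow comparison PureComponent performs on props sees a "change"
// even when every value inside is identical.
const filters = { status: "open", owner: "ada" };
const copied = { ...filters };

const referencesEqual = copied === filters;   // false: a brand-new object
const contentsEqual =
  copied.status === filters.status &&
  copied.owner === filters.owner;             // true: values were copied over
```

Spreading a fresh object into a PureComponent's props on every render therefore forces a re-render every time, whether or not anything changed.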

Avoid ES6/ES7 constructs when their usage impedes the performance of your web app. For example, do you really need that anonymous inner function in your onClick? If you aren't passing any extra arguments, odds are you don't.

Instead, know the consequences of various constructs and use them wisely.
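To see why the inline onClick handler matters, the snippet below simulates a render that creates its handler inline; renderButton is an illustrative stand-in for a render method, not a React API:

```typescript
// A new arrow function is a new object every time it is created. An
// inline onClick={() => this.save()} therefore hands the child a fresh
// prop on every render, defeating PureComponent's shallow prop check,
// exactly like the spread example above.
function renderButton(save: () => void): () => void {
  return () => save(); // fresh handler object on every "render"
}

const save = (): void => undefined;
const firstRender = renderButton(save);
const secondRender = renderButton(save);
const handlersAreStable = firstRender === secondRender; // false
```

Passing a pre-bound class method (or the function itself) instead keeps the prop reference stable across renders.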

Do You Really Need Babel?

After one of our initial rewrites from plain old JavaScript to TypeScript, we still had Babel in our pipeline. At some point we asked each other, "Wait, why do we still have Babel in the mix?" Babel is an invaluable library that accomplishes what it intends most excellently, but we were using TypeScript, which also transpiles the code for us. We didn't need Babel. Removing it simplified our build process, reduced one bit of complexity, and could only result in a net speedup of our build.

Avoid using libraries and loaders you don't need. When was the last time you audited your package.json or your webpack config to see which libraries or loaders aren't actually being used?

Instead, periodically review your build toolchain and the libraries you are loading; you may just find some you can cull.

Be Aware of Deprecated Libraries

While there is always risk in upgrading dependencies, that risk can be mitigated through functional tests, TypeScript, and the build process; the risk of not upgrading can sometimes be greater. Take, for example, React 16, which has breaking changes: later versions of React 15 warned that certain dependencies had not yet conformed to the new PropTypes standard and would break in the next release. That warning looks like:

Warning: Accessing PropTypes via the main React package is deprecated. Use the prop-types package from npm instead.

Therefore, if you never upgraded the dependent libraries that resolved these issues, there would be no option to upgrade to React 16.

Managing dependent libraries is a bit of a double-edged sword. When you lock your dependencies down, you reduce risk, but you also open up risk to missing out on future fixes and future potential optimizations. Some library dependencies may not play by the rules well and the project owners may not backport critical fixes to older versions.

The other edge of that sword is upgrading library versions too frequently.

What we've found best is a balance between locking down and upgrading. There is a sweet spot in the middle where you let major releases stabilize, then take time to upgrade dependencies during some hardening phase of your app.

Avoid locking down your dependencies and never updating. Also, avoid updating every single major release as soon as it comes out.

Instead, find a cadence for checking dependency releases, evaluate what makes sense for upgrading, and schedule those during some hardening phase of your app.

Know the Limitations of Your Stack

For example, we use redux-actions and react-redux, which have a flaw in that action argument types aren't type checked between actions and reducers. We've hit several issues with this when we updated an action but forgot to update the reducer's arguments, leaving a mismatch the type checker didn't catch. One way we've gotten around this is to create a single interface containing all of the arguments and use it on both sides. If the action and the reducer share that interface, updating it keeps both properly type checked.

Avoid this:

interface IActionProductName { productName: string; }
interface IActionProductVersion { productVersion: string; }

const requestUpdateProductVersion = createAction(types.REQUEST_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string) => ({productName, productVersion}),
    null
);
const receiveUpdateProductVersion = createAction(types.RECEIVE_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string) => ({productName, productVersion}),
    isXhrError
);

[types.RECEIVE_UPDATE_PRODUCT_VERSION]: (state: ICaseDetailsState,
    action: ActionMeta<IActionProductName & IActionProductVersion, {}>): ICaseDetailsState => {
    // ...
},

While this approach is simpler and more compact, in larger apps it suffers from a lack of type checking across the AND'd interfaces between the action and reducer. Technically, there is still no true type checking between the action and reducer, but the lack of a common single interface for the arguments opens up the risk of errors when refactoring. Instead, do this:

interface IActionUpdateProductNameVersion { 
    productName: string; 
    productVersion: string;
}

const requestUpdateProductVersion = createAction(types.REQUEST_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string): IActionUpdateProductNameVersion =>
        ({productName, productVersion}),
    null
);
const receiveUpdateProductVersion = createAction(types.RECEIVE_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string): IActionUpdateProductNameVersion =>
        ({productName, productVersion}),
    isXhrError
);

[types.RECEIVE_UPDATE_PRODUCT_VERSION]: (state: ICaseDetailsState,
    action: ActionMeta<IActionUpdateProductNameVersion, {}>): ICaseDetailsState => {
    // ...
},

By using the common interface IActionUpdateProductNameVersion, any change to that interface will be picked up by both the action and the reducer.

Profile Your Application in the Browser

React won't tell you when it's having a performance problem, and it may actually be hard to determine without looking at the JavaScript profiling data.

Many React/JavaScript performance issues fall into three categories.

The first is: did the component update when it shouldn't have? And the follow-up: is updating the component more costly than simply rendering it outright? Answering the first part is straightforward; answering the second, not so much. To tackle the first part, you can use something like https://github.com/MalucoMarinero/react-wastage-monitor, which is straightforward: it logs to the console when a component updates even though its properties were strictly equal. For that specific purpose, it works well. We did a round of optimization with it, then disabled it, since excluding node_modules didn't work perfectly and it can misfire depending on property functions and the like. It's a great tool for what it is intended to do.

The second category of JavaScript optimizations comes from profiling. Are there areas of the code taking longer than you expect? Are there memory leaks? Google's Chrome DevTools documentation has excellent references on runtime performance analysis and memory problems.

The third category is eliminating unnecessary calls and updates. This is different from the first category, which deals with checking whether a component should update; this one deals with whether to make the call in the first place. For example, without the necessary checks it is easy to accidentally trigger multiple backend calls from the same component.

Avoid simply doing this:

componentWillReceiveProps(nextProps: IProps) {
    if (this.props.id !== nextProps.id) {
        this.props.dispatch(fetchFromBackend(nextProps.id));
    }
}

export function fetchFromBackend(id: string) {
    return async (dispatch, getState: () => IStateReduced) => {
        // ...
    }
}

Instead, do this:

componentWillReceiveProps(nextProps: IProps) {
    if (this.props.id !== nextProps.id && !nextProps.isFetchingFromBackend) {
        this.props.dispatch(fetchFromBackend(nextProps.id));
    }
}

And to be safe, add another check in the action:

export function fetchFromBackend(id: string) {
    return async (dispatch, getState: () => IStateReduced) => {
        if (getState().isFetchingFromBackend) return;
        // ...
    }
}

This is a somewhat contrived example, but the logic holds: if your component's componentWillReceiveProps is triggered and nothing checks whether the backend call should actually be made, the call will be made unconditionally.

The issue gets more complicated when dealing with many different clicks and changing arguments. What if you are displaying a customer order, and the component needs to re-render with the new order, but before that request even completes the user clicks yet another order? The completion order of those async calls is not determinate: if the first call finishes after the second due to some backend delay, the user could end up seeing the wrong order. The code above doesn't address this specific situation, but it does prevent multiple calls from happening while one is still in progress. Ultimately, to handle the hypothetical above you would need to create a keyed object in the reducer, like:

objectCache: {[id: string]: object};
isFetchingCache: {[id: string]: boolean};

Here the component always references the latest id clicked, and isFetchingCache is checked against that latest id.
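A sketch of how such a keyed cache might look in a reducer; the action names and state shape here are illustrative, not our production code:

```typescript
interface IOrderState {
  currentId: string | null;
  objectCache: { [id: string]: object };
  isFetchingCache: { [id: string]: boolean };
}

type OrderAction =
  | { type: "REQUEST_ORDER"; id: string }
  | { type: "RECEIVE_ORDER"; id: string; order: object };

const initialState: IOrderState = {
  currentId: null,
  objectCache: {},
  isFetchingCache: {},
};

function orderReducer(
  state: IOrderState = initialState,
  action: OrderAction
): IOrderState {
  switch (action.type) {
    case "REQUEST_ORDER":
      // Track the latest click and mark only THIS id as in flight.
      return {
        ...state,
        currentId: action.id,
        isFetchingCache: { ...state.isFetchingCache, [action.id]: true },
      };
    case "RECEIVE_ORDER":
      // A late response for an older id lands in the cache without
      // touching currentId, so the component keeps showing the order
      // from the most recent click.
      return {
        ...state,
        objectCache: { ...state.objectCache, [action.id]: action.order },
        isFetchingCache: { ...state.isFetchingCache, [action.id]: false },
      };
    default:
      return state;
  }
}
```

The component then renders objectCache[currentId], so out-of-order responses can never overwrite what the user most recently asked for.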

Note that the above is far from all-encompassing when it comes to React and JavaScript performance issues. One example of a different kind of problem: we had a performance issue in our reducers that we narrowed down to the accidental inclusion of a very deeply nested object, taken from an API response, in redux. This very large object caused performance issues when deep cloning. We discovered it by profiling the JavaScript in Chrome, where the clone function rose to the top for a time, and we quickly found the problem.

Tune back in for Part 2! 



Published at DZone with permission of Samuel Mendenhall, DZone MVB. See the original article here.

