Why use RequireJS?

This does not work well in the browser. The following example illustrates the basic problem. Suppose we have an Employee object and we want a Manager object to derive from the Employee object. Taking this example, we might code it up like this using our script loading API (see the sketch below). As the comment indicates, if require() is async, this code will not work. However, loading scripts synchronously in the browser kills performance.
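
The code sample referred to here is missing from this copy; a minimal sketch of it, assuming a CommonJS-style require() provided by the hypothetical script-loading API mentioned above:

```javascript
// Hypothetical script-loading API with a CommonJS-style require().
var Employee = require("types/Employee");

function Manager() {
    this.reports = [];
}

// Error: if require() is asynchronous, "types/Employee" may not have
// finished loading yet, so Employee is not available at this point.
Manager.prototype = new Employee();
```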

So, what to do? If XHR is used, then we can massage the text above: we can use a regexp to find require() calls, make sure we load those scripts, then use eval() or script elements whose body text is set to the text of the script loaded via XHR. XHR also has issues with cross-domain requests -- more moving parts and more things to get wrong. In particular, you need to be sure not to send any non-standard HTTP headers, or there may be an extra "preflight" request done to make sure the cross-domain access is allowed.

Dojo has used an XHR-based loader with eval() and, while it works, it has been a source of frustration for developers. There are many edge cases and moving parts that create a tax on the developer. Am I wrong?

Here is what it says on the RequireJS website: "Once you are finished doing development and want to deploy your code for your end users, you can use the optimizer to combine the JavaScript files together and minify it."
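
For illustration, a minimal sketch of an r.js build profile along those lines (the file and module names are placeholders, and it assumes a single entry module named main):

```javascript
// build.js -- run with: node r.js -o build.js
({
    baseUrl: "js",            // directory the modules live in
    name: "main",             // entry module whose dependency graph is traced
    out: "js/main-built.js"   // single combined and minified output file
})
```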

— Matt Greer

I know this is an old thread, but I stumbled across it because I was having similar doubts. So, a follow-up question, if you don't mind: if we finally bundle everything into one file for deployment, then my whole application loads in one shot instead of piecemeal on demand.

Isn't this contradictory to AMD? But AMD is more theoretical, whereas RequireJS is also concerned with real-world performance issues. Loading each module individually is definitely cleaner and more pure, but will take forever :) — Matt Greer

I also put this question to SO (see stackoverflow). Perhaps a midpoint between the two approaches? To answer your question directly, yes, it is one or the other. — BishopZ

While I understand the benefits of keeping the files separate, I think that bundling also reduces the number of HTTP connections required. As pipelining in browsers and servers becomes more widely available, this may matter less, but it's currently a pretty big deal.

The HTTP connection overhead can mount quickly if the app is large. For our app, when we run it in unpacked mode and load each JS file individually, it takes roughly seconds just to load the page. In packed mode it's about one second.

Thousands of strongly-typed classes, and one. — Harry

Especially if the actual page is to be rendered via JS, resulting effectively in a second load cycle for the assets inlined by the generated code. Using some partials loaded via XHR would result in a third cycle, and so on. Your statements are a bit flawed. Usually this isn't an issue in production, as require.js code is normally bundled there anyway.

I have a labs site, which is a SPA and has a WebGL demo link. I don't include Three.js in my minified JS, as I only want to load it when someone clicks on the WebGL demo link. On the other hand, using a completely different load-cycle model in testing than in production isn't the best either. Who is bothering about this anymore at all? And using tools like require.js just for that? Fact is, we're all using computers that are beasts only the NSA would have had access to just some 20 years ago, and we are using the best-optimized software that ever existed (JS engines).

And using this, there are still just too many projects where you can watch pages rendering like it were the mid s. A little off topic, but an interesting project I've been using is Jam. I've been loving it for managing JS libraries and packaging them to be used. It uses the RequireJS framework, but adds some functionality to it.

Figured some people on here may find it interesting.

Most medium-sized web apps (which is what most of us are building, right?!) can get by with manually managed script tags, and keeping those in order is a chore. But it is at least as difficult to maintain RequireJS config files and the require and define statements (see the sketch below). I have seen devs spend hours pairing to figure out their RequireJS config files.
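
For readers who have not seen them, these are the kinds of statements being referred to; a small sketch with made-up module names:

```javascript
// js/cart.js -- an AMD module declared with define(); 'store' and
// 'util/currency' are hypothetical modules used only for illustration.
define(['store', 'util/currency'], function (store, currency) {
    return {
        total: function (items) {
            var sum = items.reduce(function (acc, item) {
                return acc + store.priceOf(item);
            }, 0);
            return currency.format(sum);
        }
    };
});

// js/main.js -- the entry point; require() loads the graph asynchronously.
require(['cart'], function (cart) {
    console.log(cart.total(['apple', 'pear']));
});
```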

Furthermore, many web apps will simply not benefit from lazy loading, because they will require loading almost everything at a certain choke point -- such as immediately after user login. If you can determine a clear choke point like this, it should be straightforward to configure the script bundling accordingly. Or maybe all of your custom scripts minify to 50K -- in which case you are insane to try to optimize performance by breaking that apart.

Now, of course there will be apps where lazy loading will be beneficial. But I think they will be a small minority, based on the medium-sized business apps I have worked on. Bottom line is that good devs will actually figure out whether or not it makes sense to use RequireJS on a project -- because it is not free.

People who spout that RequireJS should be used as a rule are either sheeple or self-congratulating hipster "experts."

Great discussion. I am very interested in understanding the benefits of RequireJS, but even after reading this discussion I cannot understand the benefits for serious web developers. For example, someone says "With RequireJS the order does not matter, if objects do not depend on each other." But if objects do depend on each other (object3 extends object2, which extends object1), then I should load them in order, so I can just do this:
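
In other words (the file names here are hypothetical), just three ordered script tags:

```html
<!-- Each file's dependency is loaded before it. -->
<script src="js/object1.js"></script>
<script src="js/object2.js"></script> <!-- extends object1 -->
<script src="js/object3.js"></script> <!-- extends object2 -->
```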

And that is fine, because that way I know the entire flow of my app. If I am the developer, I should know how my app works, right?!! Of course, the dependencies between libraries exist. RequireJS is a way to keep the code organized, and it keeps you from using a library before its dependencies have loaded.

The promise of automatic dependency handling sounds like music to your ears, but you will encounter so many problems when actually using it.

It simply is the worst open-source project in existence, and I would rather use Internet Explorer 5 for the rest of my life than RequireJS. Do yourself a favor and don't touch it. It adds so many levels of complexity to a project that might already be complex enough. RequireJS is a prime example of the hell that front-end development has become; it is now harder to develop a JavaScript application than to build a thousand-file C project.

I've worked with many languages, and for the past few years JavaScript and its environment have been the constant outlier when it comes to nonsensical libraries and tools that get in the way and make simple work a nightmare for the developer who does not know all the ins and outs of the library currently in favour -- a library that is often poorly documented and not developer friendly.

We should heavily encourage developers to stay away from such libraries, as they promote coding practices that are nefarious and not adapted to JavaScript, and they encourage laziness and poor knowledge of your own code base.

I joined a medium-sized team on a medium-sized project using RequireJS. Total nightmare. It's like all my prior experience of web programming (15 years on the front end) has been tossed aside. Oh sure, when it works it just works. Plus, modern web frameworks like Django let you compartmentalize functionality into templates.

And while I'll admit that this project likely isn't using RequireJS properly, I also haven't seen anything that makes me even remotely believe it's worth using. Personally, I would never in a million years introduce RequireJS into a project. There are so many other traditional ways to handle these issues that it just doesn't make sense. Software engineers really should know how their code works and not try to abstract away every last detail.

This thread made me think about my current practice and helped me decide to continue not using require.js. I don't think require.js is necessary; to me it seems more like a habit. You can use require.js or not, and it doesn't seem to cause many issues either way. If, and only if, your client-side application actually consists of more than several dozen individual JS files, then automated dependency management will certainly have its benefits. On one project we currently have around 40 modules (some of them global, some only used on specific pages), and some of the meta modules are only supposed to be loaded in certain situations.

For that purpose I wrote our own dependency manager, which has less than 50 lines of JS. In order to get all the dependencies to the client, we use Grunt to uglify all the modules into one monolithic file. For development, the files are simply concatenated without uglifying. Even if we wanted to include the source files individually, we could do that relatively simply by extracting the module list into a JSON file and including them via our server-side template engine.
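
This is not the author's actual code, but a rough sketch of what such a tiny hand-rolled dependency manager can look like (all names here are invented):

```javascript
// Minimal module registry: define everything up front, resolve lazily.
var registry = {};   // name -> { deps, factory, exports, loaded }

function appDefine(name, deps, factory) {
    registry[name] = { deps: deps, factory: factory, exports: null, loaded: false };
}

function appRequire(name) {
    var mod = registry[name];
    if (!mod) { throw new Error("Unknown module: " + name); }
    if (!mod.loaded) {
        mod.loaded = true;
        // Resolve dependencies recursively, then run the factory once.
        mod.exports = mod.factory.apply(null, mod.deps.map(appRequire));
    }
    return mod.exports;
}

// Because resolution is lazy, the concatenation order of the files
// containing appDefine() calls does not matter.
appDefine("greeter", [], function () {
    return { hello: function (who) { return "Hello, " + who; } };
});
appDefine("main", ["greeter"], function (greeter) {
    console.log(greeter.hello("world"));
});
appRequire("main");
```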

Now, this is only my setup for a bigger project. Obviously there are even larger projects out there, but considering how easy it was to set up our own dependency management, tailored to our particular needs, I doubt that an even larger project couldn't handle that. However, I have a lot of much smaller client-side projects where I only need anywhere between 3 and maybe 20 JS files.

I assume that most projects on the web are of this scale, even though they don't like to admit it. For projects like that, I hardly think your dependency graph will reach any significant complexity. Just create a single global object as your application's namespace (I usually give it a project-specific name to avoid collisions with generic names like app).

Wrap all your modules in an IIFE and inject them into your namespace. It looks something like this:
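
A minimal sketch of that pattern (the namespace name MYAPP and the module names are placeholders):

```javascript
var MYAPP = MYAPP || {};

(function (ns) {
    // "storage" module: private state stays inside the closure.
    var cache = {};
    ns.storage = {
        set: function (key, value) { cache[key] = value; },
        get: function (key) { return cache[key]; }
    };
}(MYAPP));

(function (ns) {
    // "greeter" module: uses ns.storage, which was attached above.
    ns.greeter = {
        hello: function (name) {
            ns.storage.set("lastGreeted", name);
            return "Hello, " + name;
        }
    };
}(MYAPP));

console.log(MYAPP.greeter.hello("world")); // "Hello, world"
```

A few years have passed since this was started, so there are far more options now.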

However, I wanted to share how RequireJS is still an excellent tool for medium-ish (maybe large, to some people) sized projects. Was the initial config a little tricky? Yes, but after that it's been excellent.

For an idea of the project complexity, front and back: the project has north of JS files (there are the usual inline scripts as well that aren't counted) and north of 3, class files (src). Using RequireJS in the way we've implemented it has allowed us to progressively modularize the application, which was the only realistic option. To say it's flexible would be an accurate statement, as it's currently supporting 2 modes of operation in production -- dist and src -- which we can toggle for debugging.

We run r.js for the dist builds. We're also in the early stages of a service-based approach for our client-side assets that are tied to RequireJS. Network throttling tests over the HTTP and HTTPS schemes have gone smoothly (scaled back to dial-up speeds), and we've added transpiling into the mix so we can stay current (source maps provided via Babel 6).

In summary, RequireJS is the exit point for most of the platform JS in a hybrid distribution (min/cat'd scripts in dist and modular src), and it's doing what it was designed to do flawlessly.

I totally agree with what philer stated; I've used the same setup in several projects with 20 to 30 JS dependencies and haven't run into any issues.

Not to say that RequireJS isn't a great library. The trouble stems from the lack of recognition that on the front end JS is a second-class citizen -- the author of the question is right to ask the "why" and should stick to what is simpler to them. There is of course a positive side, though: RequireJS and other projects like it have served their purpose. They highlight that there is a desire (not to be confused with a need) for JS module loading on the front end, and the browser community has listened; I invite the reader to google "ECMAScript 6 modules".

Yes, I agree with the questioner.

RequireJS has a plugin, text, for loading text resources. It will automatically be loaded if the text! prefix is used in a dependency; see the text plugin documentation.

The domReady module implements a cross-browser method to determine when the DOM is ready. Download the module and use it in your project like so (sketch below). If the page takes a long time to load and this causes timeouts, either increase the waitSeconds configuration, or just use domReady as a module and call domReady() inside the require callback.
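
The following sketch mirrors the usage shown in the RequireJS docs; it assumes domReady.js has been downloaded into your scripts directory:

```javascript
require(['domReady'], function (domReady) {
    domReady(function () {
        // Safe to query and modify the DOM from here on.
        document.body.appendChild(document.createTextNode('DOM is ready'));
    });
});
```

Once your web app gets to a certain size and popularity, localizing the strings in the interface and providing other locale-specific information becomes more useful.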

However, it can be cumbersome to work out a scheme that scales well for supporting multiple locales. RequireJS allows you to set up a basic module that has localized information without forcing you to provide all locale-specific information up front. The i18n plugin is automatically loaded when a module or dependency specifies the i18n! prefix. Download the plugin and put it in the same directory as your app's main JS file. To define a bundle, put it in a directory called "nls" -- the i18n! plugin assumes a module name with "nls" in it indicates an i18n bundle. The "nls" marker in the name tells the i18n plugin where to expect the locale directories (they should be immediate children of the nls directory).

If you wanted to provide a bundle of color names in your "my" set of modules, create the directory structure shown in the sketch below. An object literal with a property of "root" defines this module. That is all you have to do to set the stage for later localization work. RequireJS will use the browser's navigator.language (or navigator.userLanguage) value to determine the locale to use. If you prefer to set the locale yourself, you can use the module config to pass the locale to the plugin (also shown below). Note that RequireJS will always use a lowercase version of the locale, to avoid case issues, so all of the directories and files on disk for i18n bundles should use lowercase locales.
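
A sketch of that setup, following the layout in the RequireJS i18n docs ("colors" is the example bundle name):

```javascript
// Directory structure:
//   my/nls/colors.js
//
// my/nls/colors.js -- the "root" object holds the default strings.
define({
    "root": {
        "red": "red",
        "blue": "blue",
        "green": "green"
    }
});

// To force a specific locale instead of using the browser's setting,
// pass it to the i18n plugin via the module config:
requirejs.config({
    config: {
        i18n: {
            locale: 'fr-fr'
        }
    }
});
```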

For instance, if the locale is "en-us", then the "root" bundle will be used. If the locale is "fr-fr-paris", then the "fr-fr" bundle will be used. RequireJS also combines bundles together, so for instance, if the French bundle was defined like the sketch below (omitting a value for red), then the value for red in "root" will be used. This works for all locale pieces. If all the bundles listed below were defined, then RequireJS will use the values in the following priority order (the one at the top takes the most precedence):
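
For example, a French bundle that omits "red" would live at my/nls/fr-fr/colors.js:

```javascript
// my/nls/fr-fr/colors.js -- no "red" entry, so the value from "root" is used.
define({
    "blue": "bleu",
    "green": "vert"
});
```

The priority order referred to above is, from most to least specific: my/nls/fr-fr-paris/colors, then my/nls/fr-fr/colors, then my/nls/fr/colors, then my/nls/colors.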

If you prefer not to include the root bundle in the top-level module, you can define it like a normal locale bundle. In that case, the top-level module would look something like the sketch below.
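
A sketch of that variant, mirroring the pattern in the RequireJS i18n docs:

```javascript
// my/nls/colors.js -- the top-level module now only flags which
// locale bundles exist, including "root".
define({
    "root": true,
    "fr-fr": true,
    "fr-fr-paris": true
});

// my/nls/root/colors.js -- the root bundle lives in its own directory.
define({
    "red": "red",
    "blue": "blue",
    "green": "green"
});
```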

If a module ID ends in ".js", starts with a "/", or contains a URL protocol like "http:" or "https:", it is treated as a plain URL rather than being resolved through the baseUrl and paths configuration; when that happens, require.js loads it as-is. Supported configuration options include baseUrl, the root path to use for all module lookups (see the example sketch below). If your scripts do not sit directly under baseUrl, you may need to set a paths config for them. For "modules" that are just jQuery or Backbone plugins that do not need to export any module value, the shim config can just be an array of dependencies (also in the sketch below).
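
The truncated "Example: requirejs..." snippets above refer to requirejs.config() calls; a sketch combining the three points (the paths, files, and plugin names are placeholders):

```javascript
requirejs.config({
    baseUrl: 'js/lib',                  // root path for module lookups
    paths: {
        // module IDs that do not live directly under baseUrl
        app: '../app',
        jquery: 'jquery.min'            // maps "jquery" to js/lib/jquery.min.js
    },
    shim: {
        // a jQuery plugin that exports nothing: dependencies only
        'jquery.myplugin': ['jquery']
    }
});
```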

Setting shim by itself does not trigger code to load. Only use other "shim" modules as dependencies for shimmed scripts, or AMD libraries that have no dependencies and call define after they also create a global like jQuery or lodash. Otherwise, if you use an AMD module as a dependency for a shim config module, after a build, that AMD module may not be evaluated until after the shimmed code in the build executes, and an error will occur.

The ultimate fix is to upgrade all the shimmed code to have optional AMD define() calls. If that is not possible, the optimizer's wrapShim build option can wrap the shimmed code in define() wrappers; this changes the scope of shimmed dependencies, so it is not guaranteed to always work, but, for example, for shimmed dependencies that depend on an AMD version of Backbone, it can be helpful.

The shim init function will not be called for AMD modules. For example, you cannot use a shim init function to call jQuery's noConflict(). See Mapping Modules to use noConflict for an alternate approach to jQuery. Depending on the module being shimmed, it may fail in Node, because Node does not have the same global environment as browsers.
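
For completeness, a sketch of the fuller shim form with deps, exports, and an init callback ("legacy-charts" and its Charts global are hypothetical):

```javascript
requirejs.config({
    shim: {
        'legacy-charts': {
            deps: ['jquery'],          // loaded before legacy-charts.js
            exports: 'Charts',         // global used as the module value
            init: function ($) {
                // Runs after the script loads (but never for AMD modules);
                // a return value here overrides "exports".
                return this.Charts.noConflict();
            }
        }
    }
});
```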

If you wish to suppress that message, you can pass a requirejs config setting for it. Important optimizer notes for shim config: you should use the mainConfigFile build option to specify the file where the shim config can be found (see the sketch below); otherwise the optimizer will not know about it. The other option is to duplicate the shim config in the build profile. Do not mix CDN loading with shim config in a build.
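
A sketch of a build profile using mainConfigFile so the optimizer picks up the runtime shim config instead of a duplicated copy (the paths are placeholders):

```javascript
// build.js -- run with: node r.js -o build.js
({
    baseUrl: "js",
    mainConfigFile: "js/main.js",   // the file containing requirejs.config({ shim: ... })
    name: "main",
    out: "js/main-built.js"
})
```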

Example scenario: you load jQuery from the CDN but use the shim config to load something like the stock version of Backbone that depends on jQuery. When you do the build, be sure to inline jQuery in the built file and do not load it from the CDN.

Otherwise, Backbone will be inlined in the built file and it will execute before the CDN-loaded jQuery loads. This is because the shim config just delays loading of the files until dependencies are loaded, but does not do any auto-wrapping in define(). After a build, the dependencies are already inlined, and the shim config cannot delay execution of the non-define()'d code until later. So the lesson: shim config is a stop-gap measure for non-modular, legacy code.

For local, multi-file builds, the above CDN advice also applies: for any shimmed script, its dependencies must be loaded before the shimmed script executes. If you are using UglifyJS to minify the code, do not set the uglify option toplevel to true (or, if using the command line, do not pass -mt).

That option mangles the global names that shim uses to find exports.

For the packages config, the default value for a package's main module is "main", so only specify it if it differs from the default; the value is relative to the package folder (see the sketch below). There is an exception to the rule if you are using the r.js optimizer. Only one version of a package can be used in a project context at a time.
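
A sketch of a packages config along those lines, matching the "store" example mentioned below:

```javascript
requirejs.config({
    packages: [
        'cart',                            // main module defaults to cart/main.js
        { name: 'store', main: 'store' }   // main module is store/store.js instead
    ]
});
```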

You can use RequireJS multiversion support to load two different module contexts, but if you want to use Package A and B in one context and they depend on different versions of Package C, then that will be a problem. This may change in the future. If the "store" package did not follow the "main.js" convention for its main module, you would need to specify the main module explicitly in the packages config (as in the sketch above).

There is no way to know if loading a script generates a 404; worse, it triggers onreadystatechange with a complete state even in a 404 case. So detecting script load errors is unreliable.


