How to automatically display column totals in grids

Openbravo's grid view includes a very valuable feature for amount columns. If you select all the rows you want to operate on and move your mouse over the column header, the total of the selected column is shown in the bottom-left corner of your browser's status bar.

P.S.: It is important to configure your browser to allow scripts to change the status bar text via JavaScript. Many browsers have this option disabled by default for security reasons; by enabling it you take some risk of exposing yourself to malicious web sites.

In red, the functionality; in blue, how to set up the Firefox browser.


April 9, 2010 at 11:59 am 1 comment

A tip to stop and restart OpenbravoERP faster

Whether developing or running a production environment, it is common to roll improvements into your OpenbravoERP instance, which means deploying a new war file.

Obvious as it may sound, many of us simply stop the Tomcat service, erase the old deployed sources under webapps, copy the new war file, and restart Tomcat.

To be more efficient and reduce the time Tomcat takes to stop and start, it is better to also stop and start the Apache services. That is: stop httpd, stop Tomcat, copy the war file, start Tomcat, and start httpd. It sounds like it should take longer, but it does not: Apache stops and starts quickly, and with Apache stopped, Tomcat stops faster.
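The sequence above can be sketched as a small script. Service names and paths are assumptions here; adjust them to your distribution and Openbravo layout:

```shell
#!/bin/sh
# Faster redeploy: stop Apache first so Tomcat shuts down quickly.
set -e
service httpd stop
service tomcat stop
rm -rf /var/lib/tomcat/webapps/openbravo        # old exploded sources (hypothetical path)
cp /tmp/openbravo.war /var/lib/tomcat/webapps/  # new war file (hypothetical path)
service tomcat start
service httpd start
```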

This way the interruption is shorter, both in development and in production environments.

March 23, 2010 at 2:52 pm Leave a comment

OpenbravoERP server customizing experience

Implementing OpenbravoERP in mid-size and large companies often means plenty of new developments and customizations, as well as a larger pool of users. The production server will therefore need more resources.

When the production environment runs on a large server with plenty of resources, OpenbravoERP needs some parametrization in order to exploit those resources properly.

Some improvements based on my experience with a 20 GB server:

Related to Ant tasks:
To avoid java heap space errors when many sources have been developed, increase build.maxmemory in the build.xml file. By default, from OB 2.50-MP12 onward, this parameter is set to 1024M for 64-bit servers and 512M for 32-bit servers. Compilation will also be faster.
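As a sketch, the property lives in build.xml and could be raised like this (2048M is just an illustrative value for a server with memory to spare):

```
<!-- build.xml excerpt: heap available to Ant compilation tasks -->
<property name="build.maxmemory" value="2048M"/>
```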

Related to Tomcat:
To ensure higher efficiency and lower response times for users, customize some Tomcat parameters. In the file /etc/profile.d/tomcat.sh, change the -Xmx parameter based on your own criteria. Bear in mind that 64-bit servers need more resources than 32-bit ones. The following is just an example:

export CATALINA_OPTS="-server -Xms128M -Xmx2560M -XX:MaxPermSize=256M -Djava.library.path=/usr/lib64"

It is important to check the existing Tomcat documentation before you change anything, and change it first on a development environment.

Related to PostgreSQL:
To ensure better database performance, edit /srv/pgsql/8.3/postgresql.conf and adjust these parameters: shared_buffers, checkpoint_segments, maintenance_work_mem, wal_buffers and effective_cache_size.
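As a hedged illustration only (the right values depend on your workload and total RAM; check the PostgreSQL 8.3 documentation for each parameter), such an edit might look like:

```
# postgresql.conf -- illustrative values for a large dedicated server
shared_buffers = 1024MB
checkpoint_segments = 32
maintenance_work_mem = 256MB
wal_buffers = 8MB
effective_cache_size = 12GB
```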

Take into account that you will probably have to set a new value for SHMMAX; this can be done by adding kernel.shmmax = 8589934592 to /etc/sysctl.conf. Here 8589934592 bytes (8 GiB) is the result of 1024 × 1024 × 8192, a figure that comfortably exceeds the shared memory a 1024MB shared_buffers setting will request.
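The arithmetic behind that figure can be checked in the shell (the factors mirror the formula above; the point is simply that the result is 8 GiB):

```shell
# 1024 * 1024 * 8192 bytes = 8 GiB
echo $((1024 * 1024 * 8192))
```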

Again, it is important to check the existing PostgreSQL documentation before you change anything, and change it first on a development environment.

Finally, it is important to execute vacuums frequently in order to keep database statistics, and therefore performance, in good shape. This can also be done from the console.
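One hedged way to automate this is a cron entry invoking vacuumdb, the command-line wrapper that ships with PostgreSQL (the file path and schedule are assumptions):

```
# /etc/cron.d/openbravo-vacuum (hypothetical): nightly vacuum + analyze
30 2 * * * postgres /usr/bin/vacuumdb --all --analyze --quiet
```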

Remember, this is just my own experience.

February 16, 2010 at 2:49 pm Leave a comment

Reference dataset: easy to export, easy to import

Managing reference data sets is a new functionality built on the Modularity project. Its goal is to let a module ship data alongside its development artifacts, so the user gets the module fully populated when applying it.

But like many other features, reference data sets can be used in many other ways. For example:

    – If you are preparing your final environment and a few users have entered valuable information in your testing environment, reference data makes it very easy to move master data from one environment to the other.
    – I guess it could also be useful for offline synchronization between disconnected OpenbravoERP instances, or between any two applications. For example, using PDAs as routers.

Whatever your situation:

    – Create a module and check its “Has reference data” check box.
    – Set up a reference data entry of type Organization, defining the tables and columns you want to include or exclude.
    – Export the reference data; this will create an XML file.
    – Export the database.
    – Export the module.
    – Import the module in your destination environment.
    – Go to Enterprise Module Management (General Setup > Enterprise) and import the reference data.
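The export side of the steps above can be sketched from the console. The module name is hypothetical, and the exact Ant targets may vary with your Openbravo version, so treat this as a sketch:

```shell
# Run from the Openbravo source directory
ant export.database                               # export the database structures
ant package.module -Dmodule=org.example.refdata   # package the module as an .obx file
```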

Of course, this process could be simplified, for example by allowing XML files to be exported and imported without having to manage modules. That is already registered as a feature request.

There is also some more documentation available.

Do you have any other situation where reference data is useful?

December 9, 2009 at 6:45 pm Leave a comment

Preferences and callouts

The Openbravo platform’s flexibility is based on many capabilities: modularity, application dictionary, auxiliary inputs, preferences, callouts, validations, etc.

Using them separately is easy, but combining several of them requires knowing their execution sequence in order to achieve your development design.

A couple of days ago, while developing a new module, I included an auxiliary input, some preferences, more than one callout and many validations in the same WAD-generated window. Furthermore, all of them were related to a single database column.

That is, there was a column with a validation based on an auxiliary input with a callout, and the field on top of this column was affected by a preference.

How do all these functionalities work together? What is the execution sequence?

    – The auxiliary input is evaluated at HTML generation time.
    – The validation runs at execution time, when the data for the drop-down list is loaded (based on a global variable or an auxiliary input).
    – Preferences are loaded, which changes the value of the field.
    – The callout is executed in response to that preference-driven change.
As callouts can be tricky if not properly developed, it is important to know when they are executed and when they are not. It took me a while to realize a preference was interfering with my callout.

Take it into account!

December 2, 2009 at 4:10 pm 1 comment

ScrapBook

I would like to talk about a very useful tool I have just found out about: ScrapBook.

ScrapBook is a Mozilla Firefox add-on to save web pages and have a look at them later offline. It is really useful when your Internet connectivity is very bad or slow.

It saves single or multiple web pages: not just the page you are on, but all the pages linked from it, to a depth you choose. You can later highlight some text, add a sticky comment, remove content you do not want to keep from a page, and so on. It also includes folder and page management for “scraped” items, import and export tools, size calculation features, etc.

A really interesting tool for those who have limited connectivity and need to check documentation web pages frequently.

For example, I have the Openbravo wiki (http://wiki.openbravo.com) “scraped” on my laptop. It is amazingly useful: I get the information as if the wiki were on my localhost. Here is a screenshot:

Scrapbook

Do you have any similar interesting tools useful for Openbravo Community?

November 19, 2009 at 1:36 am 2 comments

Plugging and unplugging

As everybody knows, modularity has been the main improvement included in OpenbravoERP 2.50.

Modularity gives plenty of options and flexibility when developing, backing up, sharing, updating, customizing, populating, training, installing, etc.

Developing: When you have more than one development in the same environment, separating each artifact into modules makes development much more structured. You can package a module and plug it in or unplug it.

Managing: If you want to manage and supervise your developers’ work, it is easy: plug in their modules, verify them, and unplug them.

Backing up: Executing ant package.module is enough to have a backup of whatever you are developing. If you go down the wrong path, you can unplug the module and plug in your backup.

Sharing: Sending developments from one developer to another is easy. Sharing developments with the community through the central repository is very easy too.

Updating: Jumping from one maintenance pack to another is easy using modules. Plugging in the .obx file is enough.

Customizing: In a development environment (once the customization flag is set to true) you can hide some fields, show others, change properties (read only, mandatory, drop down, length, etc.), and so on. Then you can export this parametrization and move it to the production environment easily. If other developers on the project need further parametrizations, they can install the same module, generate a new version, make their changes and apply it in the production environment.

Populating: Using reference data, populating parametrization tables is very easy.

Training: Building a demo or a training environment with the sample data it needs takes little effort using modularity.

Installing: Once the different developers have finished their modules, it is easy to plug all the new modules into the production environment. You can even build a production environment from scratch and adapt it using the modules already developed. Plug them in and it is ready.

And surely there are many more uses for modularity.

Do you have some more?

October 6, 2009 at 2:09 pm Leave a comment
