In this post I will explain in detail how to create a shared library of custom functions for AMPL. At the moment the Extended Function Library already extends AMPL with over 300 functions, all of them
available in the GNU Scientific Library. Hence, before creating your own library you should check whether your function is already
included there.
I started my custom library because I needed the sigmoid function for one project and that function was not available in the Extended Function Library.
The process involves the following steps:
- Download the headers: funcadd.h and stdio1.h.
- In Visual Studio, create a new Win32 project and set the application type to DLL.
- Build the DLL and test it with AMPL.
The headers can be downloaded from the pages below:
- funcadd.h: download file
- stdio1.h: download file
Figure 1. Solution Explorer - Visual Studio
As said, we will create the sigmoid function and an approximation of it. The C code below shows one possible implementation, which can surely be improved for efficiency. But before getting into the code, let's review some general concepts about solvers and the sigmoid function. In general, solvers need the first and second order derivatives of the function; depending on the method implemented by the solver (quasi-Newton, Newton's method, etc.), they may require only the first derivative or both. For example, MINOS requires just the first order derivative, since it approximates the second one. In this .dll we will provide both derivatives for each function.
Sigmoid function: $\frac{1}{(1 +\exp(-x))}$
First order derivative: $\frac{1}{1 + \exp(-x)}\left(1 - \frac{1}{1 + \exp(-x)}\right) = \mathrm{sigmoid}(x)\,(1 - \mathrm{sigmoid}(x))$
Second order derivative: $\frac{-(\exp(x)(\exp(x) - 1))}{(\exp(x) + 1)^3}$
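As a quick check, the gradient follows from the chain rule applied to $(1 + \exp(-x))^{-1}$:

$\frac{d}{dx}\,\frac{1}{1 + \exp(-x)} = \frac{\exp(-x)}{(1 + \exp(-x))^2} = \frac{1}{1 + \exp(-x)}\left(1 - \frac{1}{1 + \exp(-x)}\right) = \mathrm{sigmoid}(x)\,(1 - \mathrm{sigmoid}(x))$

Differentiating once more, and writing everything in terms of $\exp(x)$, gives the second derivative above.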
A possible sigmoid approximation, computationally cheaper, is the "fast sigmoid" function. Evaluating the exponential function has a relevant cost, and in many situations the exact sigmoid may not be needed. This post is an interesting resource on the topic, with a good benchmark of the different methods: link.
Fast sigmoid function: $\frac{x}{(1 + |x|)}$
First order derivative: $\frac{1}{(1 + |x|)^2}$
Second order derivative: $\frac{-2\,\mathrm{sign}(x)}{(1 + |x|)^3}$
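Both expressions follow from the quotient rule, using $\frac{d}{dx}|x| = \mathrm{sign}(x) = \frac{x}{|x|}$ for $x \neq 0$:

$\frac{d}{dx}\,\frac{x}{1 + |x|} = \frac{(1 + |x|) - x\,\mathrm{sign}(x)}{(1 + |x|)^2} = \frac{1}{(1 + |x|)^2}$

Differentiating again gives $\frac{-2\,\mathrm{sign}(x)}{(1 + |x|)^3}$, which is undefined at $x = 0$, hence the domain restriction noted in the code.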
```c
#include "stdlib.h"
#include "math.h"
#include "funcadd.h"    /* includes "stdio1.h" */

#define MAX_VAL 1.0e+300

#ifdef __cplusplus
/* The #ifdef __cplusplus lines allow this to be compiled
 * either by an ANSI C compiler or by a C++ compiler. */
extern "C" {
#endif

static real sigmoid(arglist *al)
{
    /* Sigmoid function: sigmoid(x) = 1/(1 + exp(-x)) */
    real x = al->ra[0];
    AmplExports *ae = al->AE;   /* needed by fprintf (macro from funcadd.h) */

    if (x > MAX_VAL || x < -MAX_VAL) {
        fprintf(Stderr, "x = %g exceeded the maximum allowed value.\n", x);
        return -1;
    }
    if (al->derivs) {
        /* first derivative: sigmoid gradient
         * domain: R (all real numbers)
         * range: {y in R: 0 <= y <= 0.25} */
        *al->derivs = (1.0 / (1.0 + exp(-x))) * (1.0 - 1.0 / (1.0 + exp(-x)));
        if (al->hes) {
            /* second derivative (hessian)
             * domain: R (all real numbers)
             * range: {y in R: -1/(6*sqrt(3)) <= y <= 1/(6*sqrt(3))} */
            *al->hes = -(exp(x) * (exp(x) - 1.0) / pow(exp(x) + 1.0, 3));
        }
    }
    return 1.0 / (1.0 + exp(-x));
}

static real fast_sigmoid(arglist *al)
{
    /* Fast sigmoid: sigmoid function approximation.
     * fast_sigmoid(x) = x/(1 + abs(x))
     * The second derivative is undefined at x = 0. */
    real x = al->ra[0];

    if (al->derivs) {
        /* first derivative (gradient): 1/(1 + |x|)^2 */
        *al->derivs = 1.0 / pow(1.0 + fabs(x), 2);
        if (al->hes) {
            /* second derivative (hessian): -2*sign(x)/(1 + |x|)^3 */
            *al->hes = -2.0 * (x / fabs(x)) / pow(1.0 + fabs(x), 3);
        }
    }
    return x / (1.0 + fabs(x));
}

void funcadd(AmplExports *ae)
{
    /* Insert calls on addfunc here...
     *
     * Arg 3, called type, must satisfy 0 <= type <= 6:
     *  type&1 == 0: 0,2,4,6 ==> force all arguments to be numeric.
     *  type&1 == 1: 1,3,5   ==> pass both symbolic and numeric arguments.
     *  type&6 == 0: 0,1 ==> the function is real valued.
     *  type&6 == 2: 2,3 ==> the function is char* valued; static storage
     *                       suffices: AMPL copies the return value.
     *  type&6 == 4: 4,5 ==> the function is random (real valued).
     *  type&6 == 6: 6   ==> random, real valued, pass nargs real args,
     *                       0 <= nargs <= 2.
     *
     * Arg 4, called nargs, is interpreted as follows:
     *  >= 0  ==> the function has exactly nargs arguments.
     *  <= -1 ==> the function has >= -(nargs+1) arguments.
     */
    addfunc("sigmoid", (rfunc) sigmoid, 0, 1, 0);
    addfunc("f_sigmoid", (rfunc) fast_sigmoid, 0, 1, 0);
}

#ifdef __cplusplus
}
#endif
```

When creating custom functions there are two important points: set the first order derivative through *al->derivs, and, if the solver's method requires the second order derivative to build the Hessian (al->hes is non-null), set *al->hes as well.
One last comment regarding funcadd(): it is important to define appropriate names for your functions (the first argument of addfunc()); in this case I chose "sigmoid" and "f_sigmoid". Keep these names in mind, because they are exactly the names AMPL will recognize.
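As an illustration, a minimal, hypothetical model fragment (the variable and objective names are made up) would declare the imported functions with exactly the names registered via addfunc():

```
# hypothetical fragment of a .mod file
function sigmoid;      # must match the first argument of addfunc()
function f_sigmoid;

var x >= -5, <= 5;
maximize s: sigmoid(x);
```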
Now it is time to build the .dll and do some testing with AMPL. Copy your .dll into your AMPL directory:
Figure 2. AMPL directory (student edition)
Open ampl.exe or amplide.exe and type the instructions below to test your functions; if you followed these steps, you should get the same output.
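A short test session might look like the sketch below (assuming the library was built as sigmoid_lib.dll; use whatever name you gave your .dll):

```
load sigmoid_lib.dll;    # load the shared library (hypothetical name)
function sigmoid;        # declare the imported functions
function f_sigmoid;
print sigmoid(0);        # expected: 0.5
print f_sigmoid(1);      # expected: 0.5
```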
Figure 3. AMPL console
That's all from my side; I hope you find it useful.