## Monday, April 3, 2017

### Open Truco

Spanish playing cards used in Truco, ordered by value; value decreases from left to right.
Open Truco is a variation on the classic game of Truco in which the cards are visible to all players and are not dealt.

### Game setup

The cards are laid out on the table face up and arranged by their value for the "truco" (2nd phase, see Gameplay below), as shown in the figure above (sorry about the quality; I will improve the figure when I find some time).

The game can be played with the same number of players as classic Truco (2, 4, or 6).
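To make the table ordering concrete, here is a sketch of the classic (Argentine) Truco card ranking for the truco phase, strongest first. The list and function names are mine, purely illustrative, and the ranking is the standard one rather than anything stated in the post:

```python
# Sketch of the classic Truco card ranking, strongest group first.
# A card is a (number, suit) pair; suits: 'espada', 'basto', 'oro', 'copa'.
SUITS = ('espada', 'basto', 'oro', 'copa')

RANKING = [
    [(1, 'espada')],                    # ancho de espada
    [(1, 'basto')],                     # ancho de basto
    [(7, 'espada')],                    # siete de espada
    [(7, 'oro')],                       # siete de oro
    [(3, s) for s in SUITS],            # all threes
    [(2, s) for s in SUITS],            # all twos
    [(1, 'oro'), (1, 'copa')],          # "false" aces
    [(12, s) for s in SUITS],
    [(11, s) for s in SUITS],
    [(10, s) for s in SUITS],
    [(7, 'basto'), (7, 'copa')],        # "false" sevens
    [(6, s) for s in SUITS],
    [(5, s) for s in SUITS],
    [(4, s) for s in SUITS],
]

def strength(card):
    """Smaller value = stronger card in the truco phase."""
    for i, group in enumerate(RANKING):
        if card in group:
            return i
    raise ValueError(card)
```

Cards in the same group tie, exactly as in the classic game.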

### Gameplay

In open Truco the cards are not dealt; instead, players announce which cards they play as the game progresses.

Open Truco always has the following two phases:
• 1st: Envido
• 2nd: Truco

#### Envido

In the first phase the cards are not used: in some pre-defined order, the players declare the value of their envido. The result is determined as in the classic game.
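As a reminder of how an envido value is computed in the classic game, here is a sketch (names are mine and illustrative): cards 1–7 are worth their number, figures (10, 11, 12) are worth 0, and with two or more cards of the same suit the envido is 20 plus the two highest card values of that suit; otherwise it is the value of the highest card:

```python
# Sketch of the classic envido computation for a 3-card hand.
# A card is a (number, suit) pair.
def card_points(card):
    number, _suit = card
    return number if number <= 7 else 0  # figures (10, 11, 12) count 0

def envido(cards):
    """Envido value of a hand, as in classic Truco."""
    best = max(card_points(c) for c in cards)
    for suit in {s for _, s in cards}:
        same = sorted((card_points(c) for c in cards if c[1] == suit),
                      reverse=True)
        if len(same) >= 2:
            best = max(best, 20 + same[0] + same[1])
    return best
```

For example, a 7 and a 6 of the same suit give the maximum envido of 33.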

#### Truco

Following the order of play, players take the card they want to play from the table and place it in front of themselves. This phase otherwise unfolds exactly as in the classic game.
Note that the cards played must be consistent with the values declared during the Envido phase. Of course you can lie, putting the outcome of the game at risk.

If you try it, let me know what you think in the comments.

## Saturday, December 3, 2016

### Compile WRF 3.8.1 in Ubuntu 16.04.1 LTS

WRF, the Weather Research and Forecasting Model, is a mesoscale numerical weather prediction system for meteorological applications across scales from tens of meters to thousands of kilometers.
I am following these instructions and adapting them to my system.
First install the dependencies (note that you need the Fortran NetCDF library):

sudo apt install csh gfortran m4 mpich libhdf5-mpich-dev libpng-dev libnetcdff-dev netcdf-bin ncl-ncarg


We now identify the paths to the include files and libraries:

sudo updatedb
locate netcdf.inc
locate mpich/lib


On my system the last two commands give

/usr/include/netcdf.inc

and

/usr/lib/mpich/lib

Keep these in mind.

OK, after downloading and extracting WRF I wanted to do an out-of-source build (this keeps the source tree clean of build artifacts), but the Makefile is not well written, so it doesn't support this.

Hence I run configure in the source folder:

NETCDF=/usr WRFIO_NCD_LARGE_FILE_SUPPORT=1 LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/mpich/lib ./configure

This prints a list of options, from which I chose GNU gfortran (serial) and the default no-nesting option, because I will be running things like the em_hill2d_x example:

checking for perl5... no
checking for perl... found /usr/bin/perl (perl)
Will use NETCDF in dir: /usr
HDF5 not set in environment. Will configure WRF for use without.
PHDF5 not set in environment. Will configure WRF for use without.
Will use 'time' to report timing information
$JASPERLIB or $JASPERINC not found in environment, configuring to build without grib2 I/O...
------------------------------------------------------------------------
Please select from among the following Linux x86_64 options:
1. (serial) 2. (smpar) 3. (dmpar) 4. (dm+sm) PGI (pgf90/gcc)
5. (serial) 6. (smpar) 7. (dmpar) 8. (dm+sm) PGI (pgf90/pgcc): SGI MPT
9. (serial) 10. (smpar) 11. (dmpar) 12. (dm+sm) PGI (pgf90/gcc): PGI accelerator
13. (serial) 14. (smpar) 15. (dmpar) 16. (dm+sm) INTEL (ifort/icc)
17. (dm+sm) INTEL (ifort/icc): Xeon Phi (MIC architecture)
18. (serial) 19. (smpar) 20. (dmpar) 21. (dm+sm) INTEL (ifort/icc): Xeon (SNB with AVX mods)
22. (serial) 23. (smpar) 24. (dmpar) 25. (dm+sm) INTEL (ifort/icc): SGI MPT
26. (serial) 27. (smpar) 28. (dmpar) 29. (dm+sm) INTEL (ifort/icc): IBM POE
30. (serial) 31. (dmpar) PATHSCALE (pathf90/pathcc)
32. (serial) 33. (smpar) 34. (dmpar) 35. (dm+sm) GNU (gfortran/gcc)
36. (serial) 37. (smpar) 38. (dmpar) 39. (dm+sm) IBM (xlf90_r/cc_r)
40. (serial) 41. (smpar) 42. (dmpar) 43. (dm+sm) PGI (ftn/gcc): Cray XC CLE
44. (serial) 45. (smpar) 46. (dmpar) 47. (dm+sm) CRAY CCE (ftn/cc): Cray XE and XC
48. (serial) 49. (smpar) 50. (dmpar) 51. (dm+sm) INTEL (ftn/icc): Cray XC
52. (serial) 53. (smpar) 54. (dmpar) 55. (dm+sm) PGI (pgf90/pgcc)
56. (serial) 57. (smpar) 58. (dmpar) 59. (dm+sm) PGI (pgf90/gcc): -f90=pgf90
60. (serial) 61. (smpar) 62. (dmpar) 63. (dm+sm) PGI (pgf90/pgcc): -f90=pgf90
64. (serial) 65. (smpar) 66. (dmpar) 67. (dm+sm) INTEL (ifort/icc): HSW/BDW
68. (serial) 69. (smpar) 70. (dmpar) 71. (dm+sm) INTEL (ifort/icc): KNL MIC
Enter selection [1-71] : 32
------------------------------------------------------------------------
Compile for nesting? (0=no nesting, 1=basic, 2=preset moves, 3=vortex following) [default 0]:
Configuration successful!
------------------------------------------------------------------------
testing for fseeko and fseeko64
fseeko64 is supported
------------------------------------------------------------------------
# Settings for Linux x86_64 ppc64le, gfortran compiler with gcc (serial)
# ...

After this we need to fix the configure.wrf file to link against NetCDF. This is done via the LIB_EXTERNAL variable:

LIB_EXTERNAL = \
    -L$(WRF_SRC_ROOT_DIR)/external/io_netcdf -lwrfio_nf -L/usr/lib -lnetcdff -lnetcdf

Now I can compile. I first compiled WRF itself (using 7 of my cores for the compilation):

./compile -j 7 wrf


This took about 4 minutes. Then I compiled the em_hill2d_x example (again the Makefile is not correct, so you can't use multiple cores here):

./compile em_hill2d_x


This took about 10 seconds to finish. I moved to the folder where the test case is stored and ran it:

cd tests/em_hill2d_x
./ideal.exe
./wrf.exe


To plot the results I had to modify this script (here is my version): the NCARG_ROOT trick mentioned in the tutorial doesn't work (I explicitly used the folder where those scripts live on my system; use locate to find them on yours), and the scripts should use loadscript instead of load. But I got it working.

## Monday, October 10, 2016

### Update all your mercurial repos at once

The following is a bash command to update repositories located in subdirectories from the current directory.

for i in $(find -maxdepth 1 -type d); do cd $i ; hg pull && hg up ; cd -; done

The use of ; ensures that the change-directory commands are executed even if some subdirectories are not repositories. The && makes sure that hg up runs only if hg pull succeeds.
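A slightly more defensive variant (my sketch, same idea as the one-liner above) only enters directories that actually contain a .hg folder, quotes paths, and uses a subshell so there is no need for cd -:

```shell
# Update every Mercurial repo found one level below the current directory.
# Only directories containing a .hg folder are visited; the subshell (...)
# leaves the working directory of the outer shell unchanged.
for i in $(find . -maxdepth 1 -type d); do
  if [ -d "$i/.hg" ]; then
    ( cd "$i" && hg pull && hg up )
  fi
done
```

The subshell also means a failed pull cannot leave you stranded in the wrong directory.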

## Monday, September 19, 2016

### Inter-, extra- and intra-polation

Interpolation: the mathematical problem of interpolation is to find a function that goes exactly through the training points. It says nothing about the inputs on which this function will later be evaluated. Of course, the solution provides a way of evaluating the function on unseen inputs, but interpolation really is not about that; it is about "going through the given points".
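A quick numerical illustration of this distinction (a sketch using numpy; the data and variable names are mine, not from the text): a degree n-1 polynomial fit to n points interpolates them exactly, while a lower-degree least-squares fit only approximates them:

```python
import numpy as np

# Five training points (illustrative, deliberately not collinear).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])

# Interpolation: a degree-4 polynomial through 5 points hits them exactly.
interp = np.polyfit(x, y, deg=len(x) - 1)

# Smoothing/approximation: a degree-1 least-squares line only gets close.
smooth = np.polyfit(x, y, deg=1)

goes_through = np.allclose(np.polyval(interp, x), y)   # True
only_close = np.allclose(np.polyval(smooth, x), y)     # False
```

Both fits can then be evaluated anywhere, inside or outside [0, 4]; whether that evaluation is intra- or extra-polation is a separate question from whether the fit interpolates.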

Extrapolation: is about the relation between the new inputs where the function will be evaluated and the inputs used for training. It says nothing about the relation between the function and the training inputs, as interpolation does: you can extrapolate using an interpolating function (i.e. one that goes exactly through the training inputs) or a smoothing/approximating function (i.e. one that goes close to the training inputs). Extrapolation means that the new inputs are "outside the region delimited by the examples we used for training (the observation range)". For example, one could say that extrapolation is evaluating a learned function outside the convex hull of the inputs used for training (this example applies only if the input set has a notion of "inside" and "outside", which is the case in many, many situations). In many dimensions it can be difficult to define what is inside the observed region, and surely there are many ways of doing it.

Intrapolation: is a term I coined (I am sure I am not the first! Do you know if anybody used it before? Or maybe another clever word to express the same idea?). As in extrapolation, it is about the relation the new inputs have with the inputs used for training. It says nothing about the function we are using, as interpolation does. You can either intrapolate with an interpolating function or with a smoothing/approximating function. Intrapolation means that the new inputs on which the function will be evaluated are within the observation range; following the previous example, they would be inside the convex hull of the inputs used for training.
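For a 2-D input space, the convex-hull criterion can be sketched in a few lines (my own illustrative helper, using the standard cross-product test for a point inside a convex polygon; the function names are not established terminology):

```python
# Sketch: decide whether evaluating at point p is intra- or extra-polation,
# given the convex hull of the training inputs as a polygon listed
# counter-clockwise.

def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_intrapolation(p, hull):
    """True if p lies inside (or on) the CCW convex hull of the inputs."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))

# Hypothetical training-input hull: the unit square, CCW.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

With this helper, `is_intrapolation(p, square)` being False is exactly extrapolation in the convex-hull sense above, regardless of whether the fitted function interpolates or smooths.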

Hence, the complement of extrapolation would be intrapolation, whether we are using an interpolant or not. I think this makes the jargon cleaner!

Summary:
interpolation ≠ smoothing/approximation
intrapolation ≠ extrapolation

## Wednesday, August 19, 2015

### Kivy - variable Label

Lately I have been working with Kivy to produce some simple apps. One of the things I needed was a Label whose text can be changed by double clicking on it. I came up with this derived class that merges Label, TextInput and Popup.

from kivy.app import App
from kivy.uix.label import Label
from kivy.uix.popup import Popup
from kivy.uix.textinput import TextInput

class variableLabel(Label):

    def __init__(self, text):
        super(variableLabel, self).__init__()
        self.text = text

    def on_touch_down(self, touch):
        # Open an editing popup when the label is double-tapped
        if touch.is_double_tap and self.collide_point(*touch.pos):
            _input = TextInput(text=self.text, multiline=False)
            # auto_dismiss is a Popup property, not a TextInput one
            popup = Popup(title='Editing Label text', content=_input,
                          auto_dismiss=False)
            _input.bind(text=self.on_text)
            _input.bind(on_text_validate=popup.dismiss)
            popup.open()

    def on_text(self, instance, value):
        # Mirror the TextInput's text back into the label
        self.text = value

class MyApp(App):

    def build(self):
        return variableLabel("New")

if __name__ == '__main__':
    MyApp().run()

Do you have suggestions for improvement?