by submitting patches, statistical tests, new models, or examples.
`statsmodels` is developed on `Github <https://github.com/statsmodels/statsmodels>`_
-using the `Git <http://git-scm.com/>`_ version control system.
+using the `Git <https://git-scm.com/>`_ version control system.
Submitting a Bug Report
~~~~~~~~~~~~~~~~~~~~~~~
So you want to submit a patch to `statsmodels` but aren't too familiar with github? Here are the steps you need to take.
1. `Fork <https://help.github.com/articles/fork-a-repo>`_ the `statsmodels repository <https://github.com/statsmodels/statsmodels>`_ on Github.
-2. `Create a new feature branch <http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging>`_. Each branch must be self-contained, with a single new feature or bugfix.
+2. `Create a new feature branch <https://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging>`_. Each branch must be self-contained, with a single new feature or bugfix.
3. Make sure the test suite passes. This includes testing on Python 3. The easiest way to do this is either to enable `Travis-CI <https://travis-ci.org/>`_ on your fork or to make a pull request and check the results there.
4. Document your changes by editing the appropriate file in ``docs/source/``. If it is a big, new feature, add a note and an example to the latest ``docs/source/release/versionX.X.rst`` file. See older versions for examples. If it's a minor change, it will be included automatically in our release notes.
5. Add an example. If it is a big, new feature please submit an example notebook by following `these instructions <https://www.statsmodels.org/devel/dev/examples.html>`_.
Mailing List
~~~~~~~~~~~~
-Conversations about development take place on the `statsmodels mailing list <http://groups.google.com/group/pystatsmodels?hl=en>`__.
+Conversations about development take place on the `statsmodels mailing list <https://groups.google.com/group/pystatsmodels?hl=en>`__.
Learn More
~~~~~~~~~~
License
~~~~~~~
Statsmodels is released under the
-`Modified (3-clause) BSD license <http://www.opensource.org/licenses/BSD-3-Clause>`_.
+`Modified (3-clause) BSD license <https://www.opensource.org/licenses/BSD-3-Clause>`_.
cython >= 0.24
- http://cython.org/
+ https://cython.org/
Cython is required if you are building the source from github. However,
if you are building from a source distribution archive, then the
X-12-ARIMA or X-13ARIMA-SEATS
- http://www.census.gov/srd/www/x13as/
+ https://www.census.gov/srd/www/x13as/
If available, time-series analysis can be conducted using either
X-12-ARIMA or the newer X-13ARIMA-SEATS. You should place the
matplotlib >= 1.5
- http://matplotlib.org/
+ https://matplotlib.org/
Matplotlib is needed for plotting functionality and running many of the
examples.
Download and extract the source distribution from PyPI or github
- http://pypi.python.org/pypi/statsmodels
+ https://pypi.python.org/pypi/statsmodels
https://github.com/statsmodels/statsmodels/tags
Or clone the bleeding edge code from our repository on github at
Discussions take place on our mailing list.
-http://groups.google.com/group/pystatsmodels
+https://groups.google.com/group/pystatsmodels
We are very interested in feedback about usability and suggestions for
improvements.
* Dual licensed under the MIT and GPL licenses.
* This basically means you can use this code however you want for
* free, but don't claim to have written it yourself!
- * Donations always accepted: http://www.JavascriptToolbox.com/donate/
+ * Donations always accepted: https://www.JavascriptToolbox.com/donate/
*
* Please do not link to the .js files on javascripttoolbox.com from
* your site. Copy the files locally to your server instead.
Version Control and Git
-----------------------
-We use the `Git <http://git-scm.com/>`_ version control system for development.
+We use the `Git <https://git-scm.com/>`_ version control system for development.
Git allows many people to work together on the same project. In a nutshell, it
allows you to make changes to the code independently of others who may also be
working on the code and to easily contribute your changes to the
+ `Git documentation (book and videos) <https://git-scm.com/documentation>`_
+ `Github help pages <https://help.github.com/>`_
+ `NumPy documentation <https://docs.scipy.org/doc/numpy/dev/index.html>`_
-+ `Matthew Brett's Pydagogue <http://matthew-brett.github.io/pydagogue/>`_
++ `Matthew Brett's Pydagogue <https://matthew-brett.github.io/pydagogue/>`_
Below, we describe the bare minimum git commands you need to contribute to
`statsmodels`.
be such a concern. One great place to start learning about rebase is
:ref:`rebasing without tears <pydagogue:actual-rebase>`. In particular, `heed
the warnings
-<http://matthew-brett.github.io/pydagogue/rebase_without_tears.html#safety>`__.
+<https://matthew-brett.github.io/pydagogue/rebase_without_tears.html#safety>`__.
Namely, **always make a new branch before doing a rebase**. This is good
general advice for working with git. I would also add **never use rebase on
work that has already been published**. If another developer is using your
Introduction to pytest
----------------------
-Like many packages, statsmodels uses the `pytest testing system <https://docs.pytest.org/en/latest/contents.html>`__ and the convenient extensions in `numpy.testing <http://docs.scipy.org/doc/numpy/reference/routines.testing.html>`__. Pytest will find any file, directory, function, or class name that starts with ``test`` or ``Test`` (classes only). Test function should start with ``test``, test classes should start with ``Test``. These functions and classes should be placed in files with names beginning with ``test`` in a directory called ``tests``.
+Like many packages, statsmodels uses the `pytest testing system <https://docs.pytest.org/en/latest/contents.html>`__ and the convenient extensions in `numpy.testing <https://docs.scipy.org/doc/numpy/reference/routines.testing.html>`__. Pytest will find any file, directory, function, or class name that starts with ``test`` or ``Test`` (classes only). Test functions should start with ``test``; test classes should start with ``Test``. These functions and classes should be placed in files with names beginning with ``test`` in a directory called ``tests``.
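
A minimal sketch of a test module following these conventions (the file name and test contents are hypothetical)::

    # contents of statsmodels/<subpackage>/tests/test_example.py (hypothetical)
    import numpy as np
    from numpy.testing import assert_allclose


    def test_mean():
        # collected by pytest because the name starts with ``test``
        x = np.array([1.0, 2.0, 3.0])
        assert_allclose(x.mean(), 2.0)


    class TestVariance(object):
        # collected because the class name starts with ``Test``
        def test_unbiased(self):
            x = np.array([1.0, 2.0, 3.0])
            assert_allclose(x.var(ddof=1), 1.0)
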
.. _run-tests:
- resid_studentized_internal
- ess_press
- hat_matrix_diag
- - cooks_distance - Cook's Distance `Wikipedia <http://en.wikipedia.org/wiki/Cook%27s_distance>`_ (with some other links)
+ - cooks_distance - Cook's Distance `Wikipedia <https://en.wikipedia.org/wiki/Cook%27s_distance>`_ (with some other links)
- cov_ratio
- dfbetas
- dffits
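
A minimal sketch of accessing these measures from a fitted OLS model via ``get_influence`` (the data below are hypothetical)::

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.RandomState(12345)
    x = sm.add_constant(rng.standard_normal((50, 2)))
    y = np.dot(x, [1.0, 2.0, -1.0]) + rng.standard_normal(50)

    res = sm.OLS(y, x).fit()
    infl = res.get_influence()          # OLSInfluence instance
    print(infl.hat_matrix_diag[:5])     # leverage values
    print(infl.cooks_distance[0][:5])   # Cook's distance
    print(infl.dffits[0][:5])           # DFFITS
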
Since version 0.5.0, ``statsmodels`` allows users to fit statistical
models using R-style formulas. Internally, ``statsmodels`` uses the
-`patsy <http://patsy.readthedocs.io/en/latest/>`_ package to convert formulas and
+`patsy <https://patsy.readthedocs.io/en/latest/>`_ package to convert formulas and
data to the matrices that are used in model fitting. The formula
framework is quite powerful; this tutorial only scratches the surface. A
full description of the formula language can be found in the ``patsy``
docs:
-- `Patsy formula language description <http://patsy.readthedocs.io/en/latest/>`_
+- `Patsy formula language description <https://patsy.readthedocs.io/en/latest/>`_
Loading modules and functions
-----------------------------
accept ``formula`` and ``df`` arguments, whereas upper case ones take
``endog`` and ``exog`` design matrices. ``formula`` accepts a string
which describes the model in terms of a ``patsy`` formula. ``df`` takes
-a `pandas <http://pandas.pydata.org/>`_ data frame.
+a `pandas <https://pandas.pydata.org/>`_ data frame.
``dir(smf)`` will print a list of available models.
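
For example, a minimal sketch (assuming a DataFrame ``df`` with columns ``y``, ``x1`` and ``x2``)::

    import statsmodels.formula.api as smf

    # lower case interface: a formula string plus a data frame
    mod = smf.ols('y ~ x1 + x2', df)
    res = mod.fit()
    print(res.params)
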
Namespaces
----------
-Notice that all of the above examples use the calling namespace to look for the functions to apply. The namespace used can be controlled via the ``eval_env`` keyword. For example, you may want to give a custom namespace using the :class:`patsy:patsy.EvalEnvironment` or you may want to use a "clean" namespace, which we provide by passing ``eval_func=-1``. The default is to use the caller's namespace. This can have (un)expected consequences, if, for example, someone has a variable names ``C`` in the user namespace or in their data structure passed to ``patsy``, and ``C`` is used in the formula to handle a categorical variable. See the `Patsy API Reference <http://patsy.readthedocs.io/en/latest/API-reference.html>`_ for more information.
+Notice that all of the above examples use the calling namespace to look for the functions to apply. The namespace used can be controlled via the ``eval_env`` keyword. For example, you may want to give a custom namespace using the :class:`patsy:patsy.EvalEnvironment` or you may want to use a "clean" namespace, which we provide by passing ``eval_env=-1``. The default is to use the caller's namespace. This can have (un)expected consequences if, for example, someone has a variable named ``C`` in the user namespace or in their data structure passed to ``patsy``, and ``C`` is used in the formula to handle a categorical variable. See the `Patsy API Reference <https://patsy.readthedocs.io/en/latest/API-reference.html>`_ for more information.
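
A sketch of requesting a clean namespace (``df`` and its ``y`` and ``region`` columns are hypothetical)::

    import statsmodels.formula.api as smf

    C = 1.5   # a user variable that would otherwise shadow patsy's ``C``

    # evaluate the formula in a clean namespace rather than the caller's
    mod = smf.ols('y ~ C(region)', df, eval_env=-1)
    res = mod.fit()
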
Using formulas with models that do not (yet) support them
---------------------------------------------------------
import pandas
from patsy import dmatrices
-`pandas <http://pandas.pydata.org/>`_ builds on ``numpy`` arrays to provide
+`pandas <https://pandas.pydata.org/>`_ builds on ``numpy`` arrays to provide
rich data structures and data analysis tools. The ``pandas.DataFrame`` function
provides labelled arrays of (potentially heterogeneous) data, similar to the
``R`` "data.frame". The ``pandas.read_csv`` function can be used to convert a
*Literacy* and *Wealth* variables, and 4 region binary variables.
The ``patsy`` module provides a convenient function to prepare design matrices
-using ``R``-like formulas. You can find more information `here <http://patsy.readthedocs.io/en/latest/>`_.
+using ``R``-like formulas. You can find more information `here <https://patsy.readthedocs.io/en/latest/>`_.
We use ``patsy``'s ``dmatrices`` function to create design matrices:
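
A sketch of this step, assuming the Guerry data are loaded in a DataFrame ``df`` and that ``Lottery`` is the dependent variable::

    from patsy import dmatrices

    # left-hand side -> y, right-hand side -> design matrix X;
    # return_type='dataframe' keeps both as labelled pandas DataFrames
    y, X = dmatrices('Lottery ~ Literacy + Wealth + Region', data=df,
                     return_type='dataframe')
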
* returned ``pandas`` DataFrames instead of simple numpy arrays. This is useful because DataFrames allow ``statsmodels`` to carry over metadata (e.g. variable names) when reporting results.
The above behavior can of course be altered. See the `patsy doc pages
-<http://patsy.readthedocs.io/en/latest/>`_.
+<https://patsy.readthedocs.io/en/latest/>`_.
Model fit and summary
---------------------
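
A minimal sketch, assuming the ``y`` and ``X`` design matrices built above::

    import statsmodels.api as sm

    mod = sm.OLS(y, X)        # describe the model
    res = mod.fit()           # estimate it
    print(res.summary())      # coefficient table and fit statistics
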
----------------------
Internally, this function uses
-`pandas.isnull <http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data>`_.
+`pandas.isnull <https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data>`_.
Anything that returns True from this function will be treated as missing data.
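
A minimal sketch with hypothetical data; observations flagged as missing are dropped before estimation when ``missing='drop'`` is passed::

    import numpy as np
    import statsmodels.api as sm

    x = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
    y = np.array([1.0, np.nan, 3.0, 4.0, 5.0])

    # rows where pandas.isnull(...) returns True are removed
    res = sm.OLS(y, sm.add_constant(x), missing='drop').fit()
    print(res.nobs)   # 3 complete observations remain
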
The current minimum dependencies are:
* `Python <https://www.python.org>`__ >= 2.7, including Python 3.4+
-* `NumPy <http://www.scipy.org/>`__ >= 1.11
-* `SciPy <http://www.scipy.org/>`__ >= 0.18
-* `Pandas <http://pandas.pydata.org/>`__ >= 0.19
+* `NumPy <https://www.scipy.org/>`__ >= 1.11
+* `SciPy <https://www.scipy.org/>`__ >= 0.18
+* `Pandas <https://pandas.pydata.org/>`__ >= 0.19
* `Patsy <https://patsy.readthedocs.io/en/latest/>`__ >= 0.4.0
-* `Cython <http://cython.org/>`__ >= 0.24 is required to build the code from
+* `Cython <https://cython.org/>`__ >= 0.24 is required to build the code from
github but not from a source distribution.
Given the long release cycle, Statsmodels follows a loose time-based policy for
Optional Dependencies
---------------------
-* `Matplotlib <http://matplotlib.org/>`__ >= 1.5 is needed for plotting
+* `Matplotlib <https://matplotlib.org/>`__ >= 1.5 is needed for plotting
functions and running many of the examples.
-* If installed, `X-12-ARIMA <http://www.census.gov/srd/www/x13as/>`__ or
- `X-13ARIMA-SEATS <http://www.census.gov/srd/www/x13as/>`__ can be used
+* If installed, `X-12-ARIMA <https://www.census.gov/srd/www/x13as/>`__ or
+ `X-13ARIMA-SEATS <https://www.census.gov/srd/www/x13as/>`__ can be used
for time-series analysis.
* `pytest <https://docs.pytest.org/en/latest/>`__ is required to run
the test suite.
-* `IPython <http://ipython.org>`__ >= 3.0 is required to build the
+* `IPython <https://ipython.org>`__ >= 3.0 is required to build the
docs locally or to use the notebooks.
* `joblib <http://pythonhosted.org/joblib/>`__ >= 0.9 can be used to accelerate distributed
estimation for certain models.
l1-penalized Discrete Choice Models
-----------------------------------
-A new optimization method has been added to the discrete models, which includes Logit, Probit, MNLogit and Poisson, that makes it possible to estimate the models with an l1, linear, penalization. This shrinks parameters towards zero and can set parameters that are not very different from zero to zero. This is especially useful if there are a large number of explanatory variables and a large associated number of parameters. `CVXOPT <http://cvxopt.org/>`_ is now an optional dependency that can be used for fitting these models.
+A new optimization method has been added to the discrete models, which include Logit, Probit, MNLogit and Poisson, that makes it possible to estimate the models with an l1 (linear) penalization. This shrinks parameters towards zero and can set parameters that are not very different from zero to exactly zero. This is especially useful when there are a large number of explanatory variables and a large associated number of parameters. `CVXOPT <https://cvxopt.org/>`_ is now an optional dependency that can be used for fitting these models.
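
A minimal sketch (``y``, a binary response, and ``X``, a design matrix, are hypothetical)::

    import statsmodels.api as sm

    mod = sm.Logit(y, X)
    # l1-penalized fit; alpha sets the penalty weight (scalar or one value
    # per parameter); method='l1_cvxopt_cp' uses CVXOPT instead, if installed
    res = mod.fit_regularized(method='l1', alpha=1.0)
    print(res.params)
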
New and Improved Graphics
-------------------------
Most moving window statistics, like rolling mean, moments (up to 4th order), min,
max, and variance, are covered by the functions for `Moving (rolling)
-statistics/moments <http://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments>`_ in Pandas.
+statistics/moments <https://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments>`_ in Pandas.
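
For example, a sketch with a hypothetical series::

    import pandas as pd

    s = pd.Series([1.0, 2.0, 4.0, 7.0, 11.0, 16.0])

    print(s.rolling(window=3).mean())   # moving average
    print(s.rolling(window=3).var())    # moving variance
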
.. module:: statsmodels.sandbox.tsa
:synopsis: Experimental time-series analysis models
where :math:`A_i` is a :math:`K \times K` coefficient matrix.
We follow in large part the methods and notation of `Lutkepohl (2005)
-<http://www.springer.com/gb/book/9783540401728>`__,
+<https://www.springer.com/gb/book/9783540401728>`__,
which we will not develop here.
Model fitting
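
A minimal sketch, assuming a DataFrame ``data`` holding the :math:`K` endogenous series::

    from statsmodels.tsa.api import VAR

    model = VAR(data)
    results = model.fit(2)       # a VAR(2): two lags of every variable
    print(results.summary())
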
<p><a href="https://twitter.com/statsmodels">Twitter</a>
-<a href="http://scipystats.blogspot.com/"><img src="{{ pathto('_static/blogger_sm.png', 1) }}" alt="Blog"></a></p>
+<a href="https://scipystats.blogspot.com/"><img src="{{ pathto('_static/blogger_sm.png', 1) }}" alt="Blog"></a></p>
"cell_type": "markdown",
"metadata": {},
"source": [
- "All of the lower case models accept ``formula`` and ``data`` arguments, whereas upper case ones take ``endog`` and ``exog`` design matrices. ``formula`` accepts a string which describes the model in terms of a ``patsy`` formula. ``data`` takes a [pandas](http://pandas.pydata.org/) data frame or any other data structure that defines a ``__getitem__`` for variable names like a structured array or a dictionary of variables. \n",
+ "All of the lower case models accept ``formula`` and ``data`` arguments, whereas upper case ones take ``endog`` and ``exog`` design matrices. ``formula`` accepts a string which describes the model in terms of a ``patsy`` formula. ``data`` takes a [pandas](https://pandas.pydata.org/) data frame or any other data structure that defines a ``__getitem__`` for variable names like a structured array or a dictionary of variables. \n",
"\n",
"``dir(sm.formula)`` will print a list of available models. \n",
"\n",
"\n",
"The [Medpar](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/doc/COUNT/medpar.html)\n",
"dataset is hosted in CSV format at the [Rdatasets repository](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets). We use the ``read_csv``\n",
- "function from the [Pandas library](http://pandas.pydata.org) to load the data\n",
+ "function from the [Pandas library](https://pandas.pydata.org) to load the data\n",
"in memory. We then print the first few columns: \n"
]
},
"\n",
"The time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.\n",
"\n",
- "All data series considered here are taken from [Federal Reserve Economic Data (FRED)](https://research.stlouisfed.org/fred2/). Conveniently, the Python library [Pandas](http://pandas.pydata.org/) has the ability to download data from FRED directly."
+ "All data series considered here are taken from [Federal Reserve Economic Data (FRED)](https://research.stlouisfed.org/fred2/). Conveniently, the Python library [Pandas](https://pandas.pydata.org/) has the ability to download data from FRED directly."
]
},
{
# All of the lower case models accept ``formula`` and ``data`` arguments,
# whereas upper case ones take ``endog`` and ``exog`` design matrices.
# ``formula`` accepts a string which describes the model in terms of a
-# ``patsy`` formula. ``data`` takes a [pandas](http://pandas.pydata.org/)
+# ``patsy`` formula. ``data`` takes a [pandas](https://pandas.pydata.org/)
# data frame or any other data structure that defines a ``__getitem__`` for
# variable names like a structured array or a dictionary of variables.
#
# dataset is hosted in CSV format at the [Rdatasets repository](https://ra
# w.githubusercontent.com/vincentarelbundock/Rdatasets). We use the
# ``read_csv``
-# function from the [Pandas library](http://pandas.pydata.org) to load the
+# function from the [Pandas library](https://pandas.pydata.org) to load the
# data
# in memory. We then print the first few columns:
#
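# A sketch of the loading step described above. As an alternative to the raw
# CSV address (wrapped in the comment above), statsmodels' ``get_rdataset``
# helper can fetch the same file from the Rdatasets repository:
import statsmodels.api as sm

medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data
print(medpar.head())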
specification of distributions, Revue de l'Institut Internat.
de Statistique. 5: 307 (1938), reprinted in
R.A. Fisher, Contributions to Mathematical Statistics. Wiley, 1950.
- .. [*] http://en.wikipedia.org/wiki/Edgeworth_series
+ .. [*] https://en.wikipedia.org/wiki/Edgeworth_series
.. [*] S. Blinnikov and R. Moessner, Expansions for nearly Gaussian
distributions, Astron. Astrophys. Suppl. Ser. 130, 193 (1998)
res = res_ols #alias
- #http://en.wikipedia.org/wiki/PRESS_statistic
+ #https://en.wikipedia.org/wiki/PRESS_statistic
#predicted residuals, leave one out predicted residuals
resid_press = res.resid / (1-hh)
ess_press = np.dot(resid_press, resid_press)
sigma2_est = res.mse_resid  #residual variance; can be replaced by different estimators of sigma**2
sigma_est = np.sqrt(sigma2_est)
resid_studentized = res.resid / sigma_est / np.sqrt(1 - hh)
- #http://en.wikipedia.org/wiki/DFFITS:
+ #https://en.wikipedia.org/wiki/DFFITS:
dffits = resid_studentized * np.sqrt(hh / (1 - hh))
nobs, k_vars = res.model.exog.shape
res_ols.df_modelwc = res_ols.df_model + 1
n_params = res.model.exog.shape[1]
- #http://en.wikipedia.org/wiki/Cook%27s_distance
+ #https://en.wikipedia.org/wiki/Cook%27s_distance
cooks_d = res.resid**2 / sigma2_est / res_ols.df_modelwc * hh / (1 - hh)**2
#or
#Eubank p.93, 94
handles, labels = ax.get_legend_handles_labels()
# Proxy artist for fill_between legend entry
- # See http://matplotlib.org/1.3.1/users/legend_guide.html
+ # See https://matplotlib.org/1.3.1/users/legend_guide.html
plt = _import_mpl()
for label, fill_between in zip(['50% HDR', '90% HDR'], fill_betweens):
p = plt.Rectangle((0, 0), 1, 1,
License: BSD-3
TODO: update script to use sharex, sharey, and visible=False
- see http://www.scipy.org/Cookbook/Matplotlib/Multiple_Subplots_with_One_Axis_Label
+ see https://www.scipy.org/Cookbook/Matplotlib/Multiple_Subplots_with_One_Axis_Label
for sharex I need to have the ax of the last_row when editing the earlier
rows. Or use axes_grid1, ImageGrid
http://matplotlib.sourceforge.net/mpl_toolkits/axes_grid/users/overview.html
Trends in Econometrics: Vol 3: No 1, pp1-88.
http://dx.doi.org/10.1561/0800000009
-http://en.wikipedia.org/wiki/Kernel_%28statistics%29
+https://en.wikipedia.org/wiki/Kernel_%28statistics%29
Silverman, B.W. Density Estimation for Statistics and Data Analysis.
"""
Notes
-----
- See http://en.wikipedia.org/wiki/Cumulative_distribution_function
+ See https://en.wikipedia.org/wiki/Cumulative_distribution_function
For more details on the estimation see Ref. [5] in module docstring.
The multivariate CDF for mixed data (continuous and ordered/unordered
References
----------
- .. [1] http://en.wikipedia.org/wiki/Conditional_probability_distribution
+ .. [1] https://en.wikipedia.org/wiki/Conditional_probability_distribution
Examples
--------
See, for example:
- http://en.wikipedia.org/wiki/Autoregressive_moving_average_model
+ https://en.wikipedia.org/wiki/Autoregressive_moving_average_model
Parameters
----------
# Only add CI to legend for the first plot
if i == 0:
# Proxy artist for fill_between legend entry
- # See http://matplotlib.org/1.3.1/users/legend_guide.html
+ # See https://matplotlib.org/1.3.1/users/legend_guide.html
p = plt.Rectangle((0, 0), 1, 1,
fc=ci_poly.get_facecolor()[0])
References
----------
- http://en.wikipedia.org/wiki/Edgeworth_series
+ https://en.wikipedia.org/wiki/Edgeworth_series
Johnson N.L., S. Kotz, N. Balakrishnan: Continuous Univariate
Distributions, Volume 1, 2nd ed., p.30
"""
References
----------
- http://en.wikipedia.org/wiki/Edgeworth_series
+ https://en.wikipedia.org/wiki/Edgeworth_series
Johnson N.L., S. Kotz, N. Balakrishnan: Continuous Univariate
Distributions, Volume 1, 2nd ed., p.30
"""
Notes:
Compound Poisson has mass point at zero
-http://en.wikipedia.org/wiki/Compound_Poisson_distribution
+https://en.wikipedia.org/wiki/Compound_Poisson_distribution
and would need special treatment
need a distribution that has discrete mass points and continuous range, e.g.
References
----------
for method of moment estimator for known loc and scale
- http://en.wikipedia.org/wiki/Beta_distribution#Parameter_estimation
+ https://en.wikipedia.org/wiki/Beta_distribution#Parameter_estimation
http://www.itl.nist.gov/div898/handbook/eda/section3/eda366h.htm
NIST reference also includes reference to MLE in
Johnson, Kotz, and Balakrishan, Volume II, pages 221-235
References
----------
MLE :
- http://en.wikipedia.org/wiki/Poisson_distribution#Maximum_likelihood
+ https://en.wikipedia.org/wiki/Poisson_distribution#Maximum_likelihood
'''
#todo: separate out this part to be used for other compact support distributions
* automatic selection or proposal of smoothing parameters
Note: this is different from kernel smoothing regression,
- see for example http://en.wikipedia.org/wiki/Kernel_smoother
+ see for example https://en.wikipedia.org/wiki/Kernel_smoother
In this version of the kernel ridge regression, the training points
are fitted exactly.
References
----------
- http://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test
+ https://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test
Greene 5th edition
Breusch, Pagan article
References
----------
- http://en.wikipedia.org/wiki/Cochran_test
+ https://en.wikipedia.org/wiki/Cochran_test
SAS Manual for NPAR TESTS
'''
References
----------
- http://en.wikipedia.org/wiki/Vector_Autoregression
- http://en.wikipedia.org/wiki/General_matrix_notation_of_a_VAR(p)
+ https://en.wikipedia.org/wiki/Vector_Autoregression
+ https://en.wikipedia.org/wiki/General_matrix_notation_of_a_VAR(p)
'''
p = B.shape[0]
T = x.shape[0]
References
----------
- http://en.wikipedia.org/wiki/Cochran_test
+ https://en.wikipedia.org/wiki/Cochran_test
SAS Manual for NPAR TESTS
"""
References
----------
Wikipedia: kappa's initially based on these two pages
- http://en.wikipedia.org/wiki/Fleiss%27_kappa
- http://en.wikipedia.org/wiki/Cohen's_kappa
+ https://en.wikipedia.org/wiki/Fleiss%27_kappa
+ https://en.wikipedia.org/wiki/Cohen's_kappa
SAS-Manual : formulas for cohens_kappa, especially variances
see also R package irr
References
----------
- Wikipedia http://en.wikipedia.org/wiki/Fleiss%27_kappa
+ Wikipedia https://en.wikipedia.org/wiki/Fleiss%27_kappa
Fleiss, Joseph L. 1971. "Measuring Nominal Scale Agreement among Many
Raters." Psychological Bulletin 76 (5): 378-82.
"""convert non-central moments to cumulants
recursive formula produces as many cumulants as moments
- http://en.wikipedia.org/wiki/Cumulant#Cumulants_and_moments
+ https://en.wikipedia.org/wiki/Cumulant#Cumulants_and_moments
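The recursion referenced above (with :math:`m'_n` the raw moments and
:math:`\kappa_n` the cumulants) is

.. math:: \kappa_n = m'_n - \sum_{k=1}^{n-1} \binom{n-1}{k-1} \kappa_k m'_{n-k}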
"""
X = _convert_to_multidim(mnc)
References
----------
- http://en.wikipedia.org/wiki/Ramsey_RESET_test
+ https://en.wikipedia.org/wiki/Ramsey_RESET_test
"""
order = degree + 1
References
----------
- http://en.wikipedia.org/wiki/Variance_inflation_factor
+ https://en.wikipedia.org/wiki/Variance_inflation_factor
"""
k_vars = exog.shape[1]
References
----------
- `Wikipedia <http://en.wikipedia.org/wiki/DFFITS>`_
+ `Wikipedia <https://en.wikipedia.org/wiki/DFFITS>`_
"""
# TODO: do I want to use different sigma estimate in
References
----------
- http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
+ https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
Brown, Lawrence D.; Cai, T. Tony; DasGupta, Anirban (2001). "Interval
Estimation for a Binomial Proportion",
References
----------
- http://en.wikipedia.org/wiki/Akaike_information_criterion
+ https://en.wikipedia.org/wiki/Akaike_information_criterion
"""
return -2. * llf + 2. * df_modelwc
References
----------
- http://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
+ https://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
"""
return -2. * llf + 2. * df_modelwc * nobs / (nobs - df_modelwc - 1.)
References
----------
- http://en.wikipedia.org/wiki/Bayesian_information_criterion
+ https://en.wikipedia.org/wiki/Bayesian_information_criterion
"""
return -2. * llf + np.log(nobs) * df_modelwc
References
----------
- http://en.wikipedia.org/wiki/Akaike_information_criterion
+ https://en.wikipedia.org/wiki/Akaike_information_criterion
"""
if not islog:
References
----------
- http://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
+ https://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
"""
if not islog:
References
----------
- http://en.wikipedia.org/wiki/Bayesian_information_criterion
+ https://en.wikipedia.org/wiki/Bayesian_information_criterion
"""
if not islog:
# also does it hold only at the minimum, what's relationship to covariance
# of Jacobian matrix
# http://projects.scipy.org/scipy/ticket/1157
-# http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
+# https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
# objective: sum((y-f(beta,x)**2), Jacobian = d f/d beta
# and not d objective/d beta as in MLE Greene
# similar: http://crsouza.blogspot.com/2009/11/neural-network-learning-by-levenberg_18.html#hessian
#
# in example: if J = d x*beta / d beta then J'J == X'X
-# similar to http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
+# similar to https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
from __future__ import print_function
from statsmodels.compat.python import range
import numpy as np
ci_label = '$%.3g \\%%$ confidence interval' % ((1 - alpha) * 100)
# Proxy artist for fill_between legend entry
- # See e.g. http://matplotlib.org/1.3.1/users/legend_guide.html
+ # See e.g. https://matplotlib.org/1.3.1/users/legend_guide.html
p = plt.Rectangle((0, 0), 1, 1, fc=ci_poly.get_facecolor()[0])
# Legend
References
----------
- http://en.wikipedia.org/wiki/Granger_causality
+ https://en.wikipedia.org/wiki/Granger_causality
Greene: Econometric Analysis
"""
Maintainer: Skipper Seabold <jsseabold@gmail.com>
Description: Writes R matrices, vectors, and scalars to a file as numpy arrays
License: BSD
-URL: http://www.github.com/statsmodels/statsmodels
+URL: https://www.github.com/statsmodels/statsmodels
Repository: github
Collate:
'R2nparray-package.R'
#text.color : black
-### LaTeX customizations. See http://www.scipy.org/Wiki/Cookbook/Matplotlib/UsingTex
+### LaTeX customizations. See https://www.scipy.org/Wiki/Cookbook/Matplotlib/UsingTex
#text.usetex : False # use latex for all text handling. The following fonts
# are supported through the usual rc parameter settings:
# new century schoolbook, bookman, times, palatino,
# of the axis range is smaller than the
# first or larger than the second
#axes.unicode_minus : True # use unicode for the minus symbol
- # rather than hypen. See http://en.wikipedia.org/wiki/Plus_sign#Plus_sign
+ # rather than hyphen. See https://en.wikipedia.org/wiki/Plus_sign#Plus_sign
axes.color_cycle : 348ABD, 7A68A6, A60628, 467821, CF4457, 188487, E24A33
# E24A33 : orange
# 7A68A6 : purple
# other platforms:
# $HOME/.matplotlib/matplotlibrc
#
-# See http://matplotlib.org/users/customizing.html#the-matplotlibrc-file for
+# See https://matplotlib.org/users/customizing.html#the-matplotlibrc-file for
# more details on the paths which are checked for the configuration file.
#
# This file is best viewed in an editor which supports python mode
### LINES
-# See http://matplotlib.org/api/artist_api.html#module-matplotlib.lines for more
+# See https://matplotlib.org/api/artist_api.html#module-matplotlib.lines for more
# information on line properties.
#lines.linewidth : 1.0 # line width in points
#lines.linestyle : - # solid line
### PATCHES
# Patches are graphical objects that fill 2D space, like polygons or
# circles. See
-# http://matplotlib.org/api/artist_api.html#module-matplotlib.patches
+# https://matplotlib.org/api/artist_api.html#module-matplotlib.patches
# for more information on patch properties
#patch.linewidth : 1.0 # edge width in points
#patch.facecolor : blue
### FONT
#
# font properties used by text.Text. See
-# http://matplotlib.org/api/font_manager_api.html for more
+# https://matplotlib.org/api/font_manager_api.html for more
# information on font properties. The 6 font properties used for font
# matching are given below with their default values.
#
### TEXT
# text properties used by text.Text. See
-# http://matplotlib.org/api/artist_api.html#module-matplotlib.text for more
+# https://matplotlib.org/api/artist_api.html#module-matplotlib.text for more
# information on text properties
#text.color : black
### AXES
# default face and edge color, default tick sizes,
# default fontsizes for ticklabels, and so on. See
-# http://matplotlib.org/api/axes_api.html#module-matplotlib.axes
+# https://matplotlib.org/api/axes_api.html#module-matplotlib.axes
#axes.hold : True # whether to clear the axes by default on
#axes.facecolor : white # axes background color
#axes.edgecolor : black # axes edge color
#axes.unicode_minus : True # use unicode for the minus symbol
# rather than hyphen. See
- # http://en.wikipedia.org/wiki/Plus_and_minus_signs#Character_codes
+ # https://en.wikipedia.org/wiki/Plus_and_minus_signs#Character_codes
#axes.prop_cycle : cycler('color', 'bgrcmyk')
# color cycle for plot lines
# as list of string colorspecs:
#axes3d.grid : True # display grid on 3d axes
### TICKS
-# see http://matplotlib.org/api/axis_api.html#matplotlib.axis.Tick
+# see https://matplotlib.org/api/axis_api.html#matplotlib.axis.Tick
#xtick.major.size : 4 # major tick size in points
#xtick.minor.size : 2 # minor tick size in points
#xtick.major.width : 0.5 # major tick width in points
#legend.scatterpoints : 3 # number of scatter points
### FIGURE
-# See http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure
+# See https://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure
#figure.titlesize : medium # size of the figure title
#figure.titleweight : normal # weight of the figure title
#figure.figsize : 8, 6 # figure size in inches
$body
- <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
<script type="text/javascript">
init_mathjax = function() {
if (window.MathJax) {