neuralhedge.nn package
Submodules
neuralhedge.nn.base module
neuralhedge.nn.blackschole module
Benchmark strategies for Black-Scholes
- class neuralhedge.nn.blackschole.BlackScholesAlpha(mu, sigma, r, alpha=None)
  Bases: Module
  Merton problem model.
  - forward(x)
    - Return type: prop (torch.Tensor)
    - Shape: prop: (n_sample, 2)
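As a rough illustration of what such a benchmark computes (not the package's actual implementation): under log utility, the Merton problem has the closed-form constant risky-asset proportion \(\alpha^* = (\mu - r)/\sigma^2\), and the two output columns would then be the risky and riskless proportions. The helper below is a hypothetical sketch under that assumption.

```python
def merton_proportion(mu: float, r: float, sigma: float) -> tuple:
    """Constant Merton proportion under log utility: alpha* = (mu - r) / sigma**2.

    Returns (risky_proportion, riskless_proportion), mirroring the
    (n_sample, 2) output shape documented for BlackScholesAlpha.forward.
    """
    alpha = (mu - r) / sigma ** 2
    return alpha, 1.0 - alpha

# Example: mu = 8%, r = 2%, sigma = 20%  ->  alpha* = 0.06 / 0.04 = 1.5
```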
- class neuralhedge.nn.blackschole.BlackScholesDelta(sigma, risk_free_rate, strike: float)
  Bases: Module
  Delta hedging model.
  - forward(x)
    - Parameters: x (torch.Tensor) – (log_price = x[…, 0], time_to_maturity = x[…, 1])
    - Return type: bs_delta (torch.Tensor)
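For reference, the Black-Scholes call delta that such a module reproduces is \(\Delta = \Phi(d_1)\) with \(d_1 = (\log(S/K) + (r + \sigma^2/2)\tau)/(\sigma\sqrt{\tau})\). A standalone plain-Python sketch (not the module's own code), taking the same (log_price, time_to_maturity) input layout:

```python
from math import erf, log, sqrt

def bs_call_delta(log_price: float, time_to_maturity: float,
                  sigma: float, risk_free_rate: float, strike: float) -> float:
    """Black-Scholes call delta from (log_price, time_to_maturity)."""
    tau = time_to_maturity
    d1 = (log_price - log(strike) + (risk_free_rate + 0.5 * sigma ** 2) * tau) \
        / (sigma * sqrt(tau))
    # Standard normal CDF of d1 via the error function
    return 0.5 * (1.0 + erf(d1 / sqrt(2.0)))
```
At the money with \(r = 0\), \(\sigma = 0.2\), \(\tau = 1\), this gives \(d_1 = 0.1\) and a delta slightly above one half.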
- class neuralhedge.nn.blackschole.BlackScholesMeanVarianceAlpha(mu, sigma, r, Wstar)
  Bases: BlackScholesAlpha
  Mean-variance model.
  - compute_alpha(x)
  - forward(x)
    - Return type: prop (torch.Tensor)
    - Shape: prop: (n_sample, 2)
- class neuralhedge.nn.blackschole.BlackScholesMeanVarianceAlphaClip(mu, sigma, r, Wstar, clip)
  Bases: BlackScholesMeanVarianceAlpha
  Clipped mean-variance model.
  - forward(x)
    - Return type: prop (torch.Tensor)
    - Shape: prop: (n_sample, 2)
neuralhedge.nn.contigent module
neuralhedge.nn.datahedger module
- class neuralhedge.nn.datahedger.EfficientHedger(strategy: Module, init_wealth: Tensor, risk: LossMeasure = EntropicRiskMeasure(), ad_bound=0.0)
  Bases: Hedger
  Efficient hedger for hedging with data.
  - Parameters:
    - strategy (torch.nn.Module)
    - init_wealth (torch.Tensor)
    - risk (neuralhedge.nn.loss.LossMeasure)
    - ad_bound (float) – admissibility bound used as a penalty
  - compute_loss(input: List[Tensor]) → Tensor
    Compute the loss for training.
- class neuralhedge.nn.datahedger.Hedger(strategy: Module, risk: LossMeasure = EntropicRiskMeasure())
  Bases: BaseModel
  Hedger for hedging with data.
  - Parameters:
    - strategy (torch.nn.Module)
    - risk (neuralhedge.nn.loss.LossMeasure)
  - compute_holding_stock_tplus1(all_info_t: Tensor, t=None) → Tensor
    Compute the holding of the risky asset.
  - compute_info_t(info_dyn: Tensor, info: Tensor, t=None) → Tensor
    Compute the information passed to the strategy at time t.
  - compute_loss(input: List[Tensor]) → Tensor
    Compute the loss for training.
  - compute_pnl(prices: Tensor, info: Tensor, init_wealth: Tensor, payoff: Tensor)
    Compute profit and loss after deducting the payoff.
  - compute_wealth0_dis(prices: Tensor, info: Tensor) → List[Tensor]
    Compute the discounted wealth process.
  - forward(prices: Tensor, info: Tensor, init_wealth: Tensor) → Tensor
    Compute the wealth process.
  - pricer(input) → Tensor
    Pricing with risk cash.
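The wealth process behind these methods is the usual self-financing recursion \(W_{t+1} = W_t + h_t (S_{t+1} - S_t)\) (ignoring trading costs and discounting), with PnL obtained by subtracting the terminal payoff. A minimal sketch of that bookkeeping, independent of the package's tensor implementation:

```python
def wealth_process(prices, holdings, init_wealth):
    """Self-financing wealth along one price path (no costs, no discounting).

    prices:   [S_0, ..., S_T]
    holdings: [h_0, ..., h_{T-1}], units of the risky asset held over (t, t+1]
    """
    wealth = [init_wealth]
    for t, h in enumerate(holdings):
        wealth.append(wealth[-1] + h * (prices[t + 1] - prices[t]))
    return wealth

def pnl(prices, holdings, init_wealth, payoff):
    """Terminal wealth minus the payoff (the quantity compute_pnl works with)."""
    return wealth_process(prices, holdings, init_wealth)[-1] - payoff
```
For prices [100, 110, 105] with holdings [1.0, 0.5] and zero initial wealth, the wealth path is [0, 10, 7.5].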
neuralhedge.nn.datamanager module
- class neuralhedge.nn.datamanager.Manager(strategy: Module, utility_func=<function log_utility>)
  Bases: Hedger
  Hedger for portfolio management with data.
  - Parameters:
    - strategy (torch.nn.Module)
    - utility_func (Function)
  - compute_info_t(info_dyn: Tensor, info: Tensor, t=None) → Tensor
    Compute the information passed to the strategy.
  - compute_loss(input: List[Tensor]) → Tensor
    Compute the loss.
  - compute_prop_hold_tplus1(all_info_t: Tensor, t=None) → Tensor
    Compute the proportional holdings.
  - forward(prices: Tensor, info: Tensor) → Tensor
    Compute the wealth process.
  - record_history()
    Record the history of alpha.
neuralhedge.nn.loss module
- class neuralhedge.nn.loss.EntropicRiskMeasure(a: float = 1.0)
  Bases: LossMeasure
  - property a
  - forward(input_T: Tensor) → Tensor
    \(\rho(X) = (1/a) \log(\mathbb{E}[\exp(-aX)])\)
  - optimal_omega(input_T: Tensor) → Tensor
    \(f(X) = (1/a) \log(a\,\mathbb{E}[\exp(-aX)])\)
    - Parameters: input (torch.Tensor)
    - Shapes: input: (n_sample, n_timesteps, 1)
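The entropic risk measure is easy to check numerically; a plain-Python version of the same formula (the package applies it to tensors along the sample dimension):

```python
from math import exp, log

def entropic_risk(samples, a: float = 1.0) -> float:
    """rho(X) = (1/a) * log(E[exp(-a X)]), estimated by a sample mean."""
    mean_exp = sum(exp(-a * x) for x in samples) / len(samples)
    return log(mean_exp) / a
```
A quick sanity check: for a certain wealth \(c\), \(\rho(c) = -c\) for any \(a > 0\).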
- class neuralhedge.nn.loss.ExpectedShortfall(q: float = 0.5)
  Bases: LossMeasure
  Here we use \(\alpha = 1 - q\).
  - forward(input: Tensor) → Tensor
    \(f(X) = \mathrm{ES}_\alpha(X),\ \alpha = 1-q\)
  - l_func(input: Tensor) → Tensor
  - optimal_omega(input: Tensor) → Tensor
    \(f(X) = -\mathrm{VaR}_q(X),\ \alpha = 1-q\)
  - property q
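Empirically, expected shortfall at level \(\alpha = 1 - q\) averages the worst \(\alpha\)-fraction of outcomes. One common sample estimator is sketched below; sign and interpolation conventions vary across libraries, so treat this as illustrative rather than as the package's exact computation.

```python
import math

def empirical_es(samples, q: float = 0.5) -> float:
    """ES at level alpha = 1 - q: minus the mean of the worst alpha-fraction.

    Uses a simple ceil-based tail count; real implementations interpolate.
    """
    alpha = 1.0 - q
    n_tail = max(1, math.ceil(alpha * len(samples)))
    worst = sorted(samples)[:n_tail]  # smallest outcomes = largest losses
    return -sum(worst) / n_tail
```
For samples [-4, -2, 0, 2] and q = 0.5, the worst half is [-4, -2] and the estimator returns 3.0.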
- class neuralhedge.nn.loss.LossMeasure(*args, **kwargs)
  Bases: Module
  Base class for loss measures.
- class neuralhedge.nn.loss.PowerMeasure(p: float = 1.0)
  Bases: LossMeasure
  - forward(input: Tensor) → Tensor
    \(f(X) = (1/p)\,\mathbb{E}[\max(X,0)^{p}]\)
  - property p
- class neuralhedge.nn.loss.SquareMeasure(a: float = 1.0)
  Bases: LossMeasure
  - property a
  - forward(input: Tensor) → Tensor
    \(f(X) = \mathrm{Var}(X)/2 - \mathbb{E}[X]\)
  - optimal_omega(input: Tensor) → Tensor
    \(f(X) = -\mathbb{E}[X]\)
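The square measure is the classical mean-variance criterion. A scalar sketch of the formula follows; note it uses the population (biased) variance, whereas a tensor implementation might use the unbiased estimator — the docstring above does not specify which.

```python
def square_measure(samples) -> float:
    """f(X) = Var(X)/2 - E[X], with the population (biased) variance."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return var / 2.0 - mean
```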
- neuralhedge.nn.loss.admissible_cost(wealth, bound=0.0) → Tensor
  Penalty on admissibility.
- neuralhedge.nn.loss.exp_utility(input: Tensor, a: float = 1.0) → Tensor
  \(f(X) = -\exp(-aX)\)
- neuralhedge.nn.loss.expected_shortfall(input: Tensor, q: float = 0.01) → Tensor
  \(\mathrm{ES}_{q}(X)\)
- neuralhedge.nn.loss.log_utility(x: Tensor) → Tensor
  \(f(X) = -\log(X)\)
- neuralhedge.nn.loss.no_cost(holding_diff, price_now) → Tensor
  No trading cost.
- neuralhedge.nn.loss.proportional_cost(holding_diff, price_now) → Tensor
  Proportional trading cost.
- neuralhedge.nn.loss.value_at_risk(input: Tensor, q: float = 0.01) → Tensor
  \(\mathrm{VaR}_{q}(X)\)
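A proportional trading cost is typically rate × |change in holding| × current price. The sketch below adds an explicit `rate` parameter for illustration; the package's `proportional_cost` takes only `(holding_diff, price_now)`, so its rate is presumably fixed internally.

```python
def prop_cost_sketch(holding_diff: float, price_now: float,
                     rate: float = 1e-3) -> float:
    """Proportional trading cost: rate * |change in holding| * current price.

    The rate parameter is hypothetical; it is not part of the documented
    signature above.
    """
    return rate * abs(holding_diff) * price_now
```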
neuralhedge.nn.network module
- class neuralhedge.nn.network.NeuralNetSequential(n_output: int = 1, n_layers: int = 2, n_units: int = 128, activation: Module = ReLU())
  Bases: Sequential
  Dense network.
- class neuralhedge.nn.network.SingleWeight
  Bases: Module
  Network outputting a constant proportional strategy.
  - forward(x)
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
neuralhedge.nn.trainer module
- class neuralhedge.nn.trainer.Trainer(model: BaseModel)
  Bases: Module
  Trainer of the loss.
  - fit(hedger_ds: Dataset, EPOCHS=100, batch_size=256, optimizer=<class 'torch.optim.adam.Adam'>, lr_scheduler_gamma=1.0, lr=0.01)
    Fit with a dataset.
  - forward(input: List[Tensor])
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.