(PDF) How to Trade Binary Options Successfully | book P D

Everything You Always Wanted To Know About Swaps* (*But Were Afraid To Ask)

Hello, dummies
It's your old pal, Fuzzy.
As I'm sure you've all noticed, a lot of the stuff that gets posted here is - to put it delicately - fucking ridiculous. More backwards-ass shit gets posted to wallstreetbets than you'd see on a Westboro Baptist community message board. I mean, I had a look at the daily thread yesterday and..... yeesh. I know, I know. We all make like the divine Laura Dern circa 1992 on the daily and stick our hands deep into this steaming heap of shit to find the nuggets of valuable and/or hilarious information within (thanks for reading, BTW). I agree. I love it just the way it is too. That's what makes WSB great.
What I'm getting at is that a lot of the stuff that gets posted here - notwithstanding it being funny or interesting - is just... wrong. Like, fucking your cousin wrong. And to be clear, I mean the fucking your *first* cousin kinda wrong, before my Southerners in the back get all het up (simmer down, Billy Ray - I know Mabel's twice removed on your grand-sister's side). Truly, I try to let it slide. I do my bit to try and put you on the right path. Most of the time, I sleep easy no matter how badly I've seen someone explain what a bank liquidity crisis is. But out of all of those tens of thousands of misguided, autistic attempts at understanding the world of high finance, one thing gets so consistently - so *emphatically* - fucked up and misunderstood by you retards that last night I felt obligated at the end of a long work day to pull together this edition of Finance with Fuzzy just for you. It's so serious I'm not even going to make a u/pokimane gag. Have you guessed what it is yet? Here's a clue. It's in the title of the post.
That's right, friends. Today in the neighborhood we're going to talk all about hedging in financial markets - spots, swaps, collars, forwards, CDS, synthetic CDOs, all that fun shit. Don't worry; I'm going to explain what all the scary words mean and how they impact your OTM RH positions along the way.
We're going to break it down like this. (1) "What's a hedge, Fuzzy?" (2) Common Hedging Strategies and (3) All About ISDAs and Credit Default Swaps.
Before we begin. For the nerds and JV traders in the back (and anyone else who needs to hear this up front) - I am simplifying these descriptions for the purposes of this post. I am also obviously not going to try and cover every exotic form of hedge under the sun or give a detailed summation of what caused the financial crisis. If you are interested in something specific ask a question, but don't try and impress me with your Investopedia skills or technical points I didn't cover; I will just be forced to flex my years of IRL experience on you in the comments and you'll look like a big dummy.
TL;DR? Fuck you. There is no TL;DR. You've come this far already. What's a few more paragraphs? Put down the Cheetos and try to concentrate for the next 5-7 minutes. You'll learn something, and I promise I'll be gentle.
Ready? Let's get started.
1. The Tao of Risk: Hedging as a Way of Life
The simplest way to characterize what a hedge 'is' is to imagine every action having a binary outcome. One is bad, one is good. Red lines, green lines; uppie, downie. With me so far? Good. A 'hedge' is simply the employment of a strategy to mitigate the effect of your action having the wrong binary outcome. You wanted X, but you got Z! Frowny face. A hedge strategy introduces a third outcome. If you hedged against the possibility of Z happening, then you can wind up with Y instead. Not as good as X, but not as bad as Z. The technical definition I like to give my idiot juniors is as follows:
Utilization of a defensive strategy to mitigate risk, at a fraction of the cost to capital of the risk itself.
Congratulations. You just finished Hedging 101. "But Fuzzy, that's easy! I just sold a naked call against my 95% OTM put! I'm adequately hedged!". Spoiler alert: you're not (although good work on executing a collar, which I describe below). What I'm talking about here is what would be referred to as a 'perfect hedge'; a binary outcome where downside is totally mitigated by a risk management strategy. That's not how it works IRL. Pay attention; this is the tricky part.
You can't take a single position and conclude that you're adequately hedged because risks are fluid, not static. So you need to constantly adjust your position in order to maximize the value of the hedge and insure your position. You also need to consider exposure to more than one category of risk. There are micro (specific exposure) risks, and macro (trend exposure) risks, and both need to factor into the hedge calculus.
That's why, in the real world, the value of hedging depends entirely on the design of the hedging strategy itself. Here, when we say "value" of the hedge, we're not talking about cash money - we're talking about the intrinsic value of the hedge relative to the the risk profile of your underlying exposure. To achieve this, people hedge dynamically. In wallstreetbets terms, this means that as the value of your position changes, you need to change your hedges too. The idea is to efficiently and continuously distribute and rebalance risk across different states and periods, taking value from states in which the marginal cost of the hedge is low and putting it back into states where marginal cost of the hedge is high, until the shadow value of your underlying exposure is equalized across your positions. The punchline, I guess, is that one static position is a hedge in the same way that the finger paintings you make for your wife's boyfriend are art - it's technically correct, but you're only playing yourself by believing it.
Anyway. Obviously doing this as a small potatoes trader is hard but it's worth taking into account. Enough basic shit. So how does this work in markets?
2. A Hedging Taxonomy
The best place to start here is a practical question. What does a business need to hedge against? Think about the specific risk that an individual business faces. These are legion, so I'm just going to list a few of the key ones that apply to most corporates. (1) You have commodity risk for the shit you buy or the shit you use. (2) You have currency risk for the money you borrow. (3) You have rate risk on the debt you carry. (4) You have offtake risk for the shit you sell. Complicated, right? To help address the many and varied ways that shit can go wrong in a sophisticated market, smart operators like yours truly have devised a whole bundle of different instruments which can help you manage the risk. I might write about some of the more complicated ones in a later post if people are interested (CDO/CLOs, strip/stack hedges and bond swaps with option toggles come to mind) but let's stick to the basics for now.
(i) Swaps
A swap is one of the most common forms of hedge instrument, and they're used by pretty much everyone that can afford them. The language is complicated but the concept isn't, so pay attention and you'll be fine. This is the most important part of this section so it'll be the longest one.
Swaps are derivative contracts with two counterparties (before you ask, you can't trade 'em on an exchange - they're OTC instruments only). They're used to exchange one cash flow for another cash flow of equal expected value; doing this allows you to take speculative positions on certain financial prices or to alter the cash flows of existing assets or liabilities within a business. "Wait, Fuzz; slow down! What do you mean sets of cash flows?". Fear not, little autist. Ol' Fuzz has you covered.
The cash flows I'm talking about are referred to in swap-land as 'legs'. One leg is fixed - a set payment that's the same every time it gets paid - and the other is variable - it fluctuates (typically indexed off the price of the underlying risk that you are speculating on / protecting against). You set it up at the start so that they're notionally equal and the two legs net off; so at open, the swap is a zero NPV instrument. Here's where the fun starts. If the price that you based the variable leg of the swap on changes, the value of the swap will shift; the party on the wrong side of the move ponies up via the variable payment. It's a zero sum game.
I'll give you an example using the most vanilla swap around; an interest rate trade. Here's how it works. You borrow money from a bank, and they charge you a rate of interest. You lock the rate up front, because you're smart like that. But then - quelle surprise! - the rate gets better after you borrow. Now you're bagholding to the tune of, I don't know, 5 bps. Doesn't sound like much but on a billion dollar loan that's a lot of money (a classic example of the kind of 'small, deep hole' that's terrible for profits). Now, if you had a swap contract on the rate before you entered the trade, you're set; if the rate goes down, you get a payment under the swap. If it goes up, whatever payment you're making to the bank is netted off by the fact that you're borrowing at a sub-market rate. Win-win! Or, at least, Lose Less / Lose Less. That's the name of the game in hedging.
There are many different kinds of swaps, some of which are pretty exotic; but they're all different variations on the same theme. If your business has exposure to something which fluctuates in price, you trade swaps to hedge against the fluctuation. The valuation of swaps is also super interesting but I guarantee you that 99% of you won't understand it so I'm not going to try and explain it here although I encourage you to google it if you're interested.
Because they're OTC, none of them are filed publicly. Someeeeeetimes you see an ISDA (dsicussed below) but the confirms themselves (the individual swaps) are not filed. You can usually read about the hedging strategy in a 10-K, though. For what it's worth, most modern credit agreements ban speculative hedging. Top tip: This is occasionally something worth checking in credit agreements when you invest in businesses that are debt issuers - being able to do this increases the risk profile significantly and is particularly important in times of economic volatility (ctrl+f "non-speculative" in the credit agreement to be sure).
(ii) Forwards
A forward is a contract made today for the future delivery of an asset at a pre-agreed price. That's it. "But Fuzzy! That sounds just like a futures contract!". I know. Confusing, right? Just like a futures trade, forwards are generally used in commodity or forex land to protect against price fluctuations. The differences between forwards and futures are small but significant. I'm not going to go into super boring detail because I don't think many of you are commodities traders but it is still an important thing to understand even if you're just an RH jockey, so stick with me.
Just like swaps, forwards are OTC contracts - they're not publicly traded. This is distinct from futures, which are traded on exchanges (see The Ballad Of Big Dick Vick for some more color on this). In a forward, no money changes hands until the maturity date of the contract when delivery and receipt are carried out; price and quantity are locked in from day 1. As you now know having read about BDV, futures are marked to market daily, and normally people close them out with synthetic settlement using an inverse position. They're also liquid, and that makes them easier to unwind or close out in case shit goes sideways.
People use forwards when they absolutely have to get rid of the thing they made (or take delivery of the thing they need). If you're a miner, or a farmer, you use this shit to make sure that at the end of the production cycle, you can get rid of the shit you made (and you won't get fucked by someone taking cash settlement over delivery). If you're a buyer, you use them to guarantee that you'll get whatever the shit is that you'll need at a price agreed in advance. Because they're OTC, you can also exactly tailor them to the requirements of your particular circumstances.
These contracts are incredibly byzantine (and there are even crazier synthetic forwards you can see in money markets for the true degenerate fund managers). In my experience, only Texan oilfield magnates, commodities traders, and the weirdo forex crowd fuck with them. I (i) do not own a 10 gallon hat or a novelty size belt buckle (ii) do not wake up in the middle of the night freaking out about the price of pork fat and (iii) love greenbacks too much to care about other countries' monopoly money, so I don't fuck with them.
(iii) Collars
No, not the kind your wife is encouraging you to wear try out to 'spice things up' in the bedroom during quarantine. Collars are actually the hedging strategy most applicable to WSB. Collars deal with options! Hooray!
To execute a basic collar (also called a wrapper by tea-drinking Brits and people from the Antipodes), you buy an out of the money put while simultaneously writing a covered call on the same equity. The put protects your position against price drops and writing the call produces income that offsets the put premium. Doing this limits your tendies (you can only profit up to the strike price of the call) but also writes down your risk. If you screen large volume trades with a VOL/OI of more than 3 or 4x (and they're not bullshit biotech stocks), you can sometimes see these being constructed in real time as hedge funds protect themselves on their shorts.
(3) All About ISDAs, CDS and Synthetic CDOs
You may have heard about the mythical ISDA. Much like an indenture (discussed in my post on $F), it's a magic legal machine that lets you build swaps via trade confirms with a willing counterparty. They are very complicated legal documents and you need to be a true expert to fuck with them. Fortunately, I am, so I do. They're made of two parts; a Master (which is a form agreement that's always the same) and a Schedule (which amends the Master to include your specific terms). They are also the engine behind just about every major credit crunch of the last 10+ years.
First - a brief explainer. An ISDA is a not in and of itself a hedge - it's an umbrella contract that governs the terms of your swaps, which you use to construct your hedge position. You can trade commodities, forex, rates, whatever, all under the same ISDA.
Let me explain. Remember when we talked about swaps? Right. So. You can trade swaps on just about anything. In the late 90s and early 2000s, people had the smart idea of using other people's debt and or credit ratings as the variable leg of swap documentation. These are called credit default swaps. I was actually starting out at a bank during this time and, I gotta tell you, the only thing I can compare people's enthusiasm for this shit to was that moment in your early teens when you discover jerking off. Except, unlike your bathroom bound shame sessions to Mom's Sears catalogue, every single person you know felt that way too; and they're all doing it at once. It was a fiscal circlejerk of epic proportions, and the financial crisis was the inevitable bukkake finish. WSB autism is absolutely no comparison for the enthusiasm people had during this time for lighting each other's money on fire.
Here's how it works. You pick a company. Any company. Maybe even your own! And then you write a swap. In the swap, you define "Credit Event" with respect to that company's debt as the variable leg . And you write in... whatever you want. A ratings downgrade, default under the docs, failure to meet a leverage ratio or FCCR for a certain testing period... whatever. Now, this started out as a hedge position, just like we discussed above. The purest of intentions, of course. But then people realized - if bad shit happens, you make money. And banks... don't like calling in loans or forcing bankruptcies. Can you smell what the moral hazard is cooking?
Enter synthetic CDOs. CDOs are basically pools of asset backed securities that invest in debt (loans or bonds). They've been around for a minute but they got famous in the 2000s because a shitload of them containing subprime mortgage debt went belly up in 2008. This got a lot of publicity because a lot of sad looking rednecks got foreclosed on and were interviewed on CNBC. "OH!", the people cried. "Look at those big bad bankers buying up subprime loans! They caused this!". Wrong answer, America. The debt wasn't the problem. What a lot of people don't realize is that the real meat of the problem was not in regular way CDOs investing in bundles of shit mortgage debts in synthetic CDOs investing in CDS predicated on that debt. They're synthetic because they don't have a stake in the actual underlying debt; just the instruments riding on the coattails. The reason these are so popular (and remain so) is that smart structured attorneys and bankers like your faithful correspondent realized that an even more profitable and efficient way of building high yield products with limited downside was investing in instruments that profit from failure of debt and in instruments that rely on that debt and then hedging that exposure with other CDS instruments in paired trades, and on and on up the chain. The problem with doing this was that everyone wound up exposed to everybody else's books as a result, and when one went tits up, everybody did. Hence, recession, Basel III, etc. Thanks, Obama.
Heavy investment in CDS can also have a warping effect on the price of debt (something else that happened during the pre-financial crisis years and is starting to happen again now). This happens in three different ways. (1) Investors who previously were long on the debt hedge their position by selling CDS protection on the underlying, putting downward pressure on the debt price. (2) Investors who previously shorted the debt switch to buying CDS protection because the relatively illiquid debt (partic. when its a bond) trades at a discount below par compared to the CDS. The resulting reduction in short selling puts upward pressure on the bond price. (3) The delta in price and actual value of the debt tempts some investors to become NBTs (neg basis traders) who long the debt and purchase CDS protection. If traders can't take leverage, nothing happens to the price of the debt. If basis traders can take leverage (which is nearly always the case because they're holding a hedged position), they can push up or depress the debt price, goosing swap premiums etc. Anyway. Enough technical details.
I could keep going. This is a fascinating topic that is very poorly understood and explained, mainly because the people that caused it all still work on the street and use the same tactics today (it's also terribly taught at business schools because none of the teachers were actually around to see how this played out live). But it relates to the topic of today's lesson, so I thought I'd include it here.
Work depending, I'll be back next week with a covenant breakdown. Most upvoted ticker gets the post.
*EDIT 1\* In a total blowout, $PLAY won. So it's D&B time next week. Post will drop Monday at market open.
submitted by fuzzyblankeet to wallstreetbets [link] [comments]

Step-by-Step Guide for Adding a Stack, Expanding Control Lines, and Building an Assembler

After the positive response to my first tutorial on expanding the RAM, I thought I'd continue the fun by expanding the capabilities of Ben's 8-bit CPU even further. That said, you'll need to have done the work in the previous post to be able to do this. You can get a sense for what we'll do in this Imgur gallery.
In this tutorial, we'll balance software and hardware improvements to make this a pretty capable machine:

Parts List

To only update the hardware, you'll need:
If you want to update the toolchain, you'll need:
  1. Arduino Mega 2560 (Amazon) to create the programmer.
  2. Ribbon Jumper Cables (Amazon) to connect the Arduino to the breadboard.
  3. TL866 II Plus EEPROM Programmer (Amazon) to program the ROM.
Bonus Clock Improvement: One additional thing I did is replace the 74LS04 inverter in Ben's clock circuit with a 74LS14 inverting Schmitt trigger (datasheet, Jameco). The pinouts are identical! Just drop it in, wire the existing lines, and then run the clock output through it twice (since it's inverting) to get a squeaky clean clock signal. Useful if you want to go even faster with the CPU.

Step 1: Program with an Arduino and Assembler (Image 1, Image 2)

There's a certain delight in the physical programming of a computer with switches. This is how Bill Gates and Paul Allen famously programmed the Altair 8800 and started Microsoft. But at some point, the hardware becomes limited by how effectively you can input the software. After upgrading the RAM, I quickly felt constrained by how long it took to program everything.
You can continue to program the computer physically if you want and even after upgrading that option is still available, so this step is optional. There's probably many ways to approach the programming, but this way felt simple and in the spirit of the build. We'll use an Arduino Mega 2560, like the one in Ben's 6502 build, to program the RAM. We'll start with a homemade assembler then switch to something more robust.
Preparing the Physical Interface
The first thing to do is prepare the CPU to be programmed by the Arduino. We already did the hard work on this in the RAM upgrade tutorial by using the bus to write to the RAM and disconnecting the control ROM while in program mode. Now we just need to route the appropriate lines to a convenient spot on the board to plug the Arduino into.
  1. This is optional, but I rewired all the DIP switches to have ground on one side, rather than alternating sides like Ben's build. This just makes it easier to route wires.
  2. Wire the 8 address lines from the DIP switch, connecting the side opposite to ground (the one going to the chips) to a convenient point on the board. I put them on the far left, next to the address LEDs and above the write button circuit.
  3. Wire the 8 data lines from the DIP switch, connecting the side opposite to ground (the one going to the chips) directly below the address lines. Make sure they're separated by the gutter so they're not connected.
  4. Wire a line from the write button to your input area. You want to connect the side of the button that's not connected to ground (the one going to the chip).
So now you have one convenient spot with 8 address lines, 8 data lines, and a write line. If you want to get fancy, you can wire them into some kind of connector, but I found that ribbon jumper cables work nicely and keep things tidy.
The way we'll program the RAM is to enter program mode and set all the DIP switches to the high position (e.g., 11111111). Since the switches are upside-down, this means they'll all be disconnected and not driving to ground. The address and write lines will simply be floating and the data lines will be weakly pulled up by 1k resistors. Either way, the Arduino can now drive the signals going into the chips using its outputs.
Creating the Arduino Programmer
Now that we can interface with an Arduino, we need to write some software. If you follow Ben's 6502 video, you'll have all the knowledge you need to get this working. If you want some hints and code, see below (source code):
  1. Create arrays for your data and address lines. For example: const char ADDRESS_LINES[] = {39, 41, 43, 45, 47, 49, 51, 53};. Create your write line with #define RAM_WRITE 3.
  2. Create functions to enable and disable your address and data lines. You want to enable them before writing. Make sure to disable them afterward so that you can still manually program using DIP switches without disconnecting the Arduino. The code looks like this (just change INPUT to OUTPUT accordingly): for(int n = 0; n < 8; n += 1) { pinMode(ADDRESS_LINES[n], OUTPUT); }
  3. Create a function to write to an address. It'll look like void writeData(byte writeAddress, byte writeData) and basically use two loops, one for address and one for data, followed by toggling the write.
  4. Create a char array that contains your program and data. You can use #define to create opcodes like #define LDA 0x01.
  5. In your main function, loop through the program array and send it through writeData.
With this setup, you can now load multi-line programs in a fraction of a second! This can really come in handy with debugging by stress testing your CPU with software. Make sure to test your setup with existing programs you know run reliably. Now that you have your basic setup working, you can add 8 additional lines to read the bus and expand the program to let you read memory locations or even monitor the running of your CPU.
Making an Assembler
The above will serve us well but it's missing a key feature: labels. Labels are invaluable in assembly because they're so versatile. Jumps, subroutines, variables all use labels. The problem is that labels require parsing. Parsing is a fun project on the road to a compiler but not something I wanted to delve into right now--if you're interested, you can learn about Flex and Bison. Instead, I found a custom assembler that lets you define your CPU's instruction set and it'll do everything else for you. Let's get it setup:
  1. If you're on Windows, you can use the pre-built binaries. Otherwise, you'll need to install Rust and compile via cargo build.
  2. Create a file called 8bit.cpu and define your CPU instructions (source code). For example, LDA would be lda {address} -> 0x01 @ address[7:0]. What's cool is you can also now create the instruction's immediate variant instead of having to call it LDI: lda #{value} -> 0x05 @ value[7:0].
  3. You can now write assembly by adding #include "8bit.cpu" to the top of your code. There's a lot of neat features so make sure to read the documentation!
  4. Once you've written some assembly, you can generate the machine code using ./customasm yourprogram.s -f hexc -p. This prints out a char array just like our Arduino program used!
  5. Copy the char array into your Arduino program and send it to your CPU.
At this stage, you can start creating some pretty complex programs with ease. I would definitely play around with writing some larger programs. I actually found a bug in my hardware that was hidden for a while because my programs were never very complex!

Step 2: Expand the Control Lines (Image)

Before we can expand the CPU any further, we have to address the fact we're running out of control lines. An easy way to do this is to add a 3rd 28C16 ROM and be on your way. If you want something a little more involved but satisfying, read on.
Right now the control lines are one hot encoded. This means that if you have 4 lines, you can encode 4 states. But we know that a 4-bit binary number can encode 16 states. We'll use this principle via 74LS138 decoders, just like Ben used for the step counter.
Choosing the Control Line Combinations
Everything comes with trade-offs. In the case of combining control lines, it means the two control lines we choose to combine can never be activated at the same time. We can ensure this by encoding all the inputs together in the first 74LS138 and all the outputs together in a second 74LS138. We'll keep the remaining control lines directly connected.
Rewiring the Control Lines
If your build is anything like mine, the control lines are a bit of a mess. You'll need to be careful when rewiring to ensure it all comes back together correctly. Let's get to it:
  1. Place the two 74LS138 decoders on the far right side of the breadboard with the ROMs. Connect them to power and ground.
  2. You'll likely run out of inverters, so place a 74LS04 on the breadboard above your decoders. Connect it to power and ground.
  3. Carefully take your inputs (MI, RI, II, AI, BI, J) and wire them to the outputs of the left 74LS138. Do not wire anything to O0 because that's activated by 000 which won't work for us!
  4. Carefully take your outputs (RO, CO, AO, EO) and wire them to the outputs of the right 74LS138. Remember, do not wire anything to O0!
  5. Now, the 74LS138 outputs are active low, but the ROM outputs were active high. This means you need to swap the wiring on all your existing 74LS04 inverters for the LEDs and control lines to work. Make sure you track which control lines are supposed to be active high vs. active low!
  6. Wire E3 to power and E2 to ground. Connect the E1 on both 138s together, then connect it to the same line as OE on your ROMs. This will ensure that the outputs are disabled when you're in program mode. You can actually take off the 1k pull-up resistors from the previous tutorial at this stage, because the 138s actively drive the lines going to the 74LS04 inverters rather than floating like the ROMs.
At this point, you really need to ensure that the massive rewiring job was successful. Connect 3 jumper wires to A0-A2 and test all the combinations manually. Make sure the correct LED lights up and check with a multimeteoscilloscope that you're getting the right signal at each chip. Catching mistakes at this point will save you a lot of headaches! Now that everything is working, let's finish up:
  1. Connect A0-A2 of the left 74LS138 to the left ROM's A0-A2.
  2. Connect A0-A2 of the right 74LS138 to the right ROM's A0-A2.
  3. Distribute the rest of the control signals across the two ROMs.
Changing the ROM Code
This part is easy. We just need to update all of our #define with the new addresses and program the ROMs again. For clarity that we're not using one-hot encoding anymore, I recommend using hex instead of binary. So instead of #define MI 0b0000000100000000, we can use #define MI 0x0100, #define RI 0x0200, and so on.
Testing
Expanding the control lines required physically rewiring a lot of critical stuff, so small mistakes can creep up and make mysterious errors down the road. Write a program that activates each control line at least once and make sure it works properly! With your assembler and Arduino programmer, this should be trivial.
Bonus: Adding B Register Output
With the additional control lines, don't forget you can now add a BO signal easily which lets you fully use the B register.

Step 3: Add a Stack (Image 1, Image 2)

Adding a stack significantly expands the capability of the CPU. It enables subroutines, recursion, and handling interrupts (with some additional logic). We'll create our stack with an 8-bit stack pointer hard-coded from $0100 to $01FF, just like the 6502.
Wiring up the Stack Pointer
A stack pointer is conceptually similar to a program counter. It stores an address, you can read it and write to it, and it increments. The only difference between a stack pointer and a program counter is that the stack pointer must also decrement. To create our stack pointer, we'll use two 74LS193 4-bit up/down binary counters:
  1. Place a 74LS00 NAND gate, 74LS245 transceiver, and two 74LS193 counters in a row next to your output register. Wire up power and ground.
  2. Wire the the Carry output of the right 193 to the Count Up input of the left 193. Do the same for the Borrow output and Count Down input.
  3. Connect the Clear input between the two 193s and with an active high reset line. The B register has one you can use on its 74LS173s.
  4. Connect the Load input between the two 193s and to a new active low control line called SI on your 74LS138 decoder.
  5. Connect the QA-QD outputs of the lower counter to A8-A5 and the upper counter to A4-A1. Pay special attention because the output are in a weird order (BACD) and you want to make sure the lower A is connected to A8 and the upper A is connected to A4.
  6. Connect the A-D inputs of the lower counter to B8-B5 and the upper counter to B4-B1. Again, the inputs are in a weird order and on both sides of the chip so pay special attention.
  7. Connect the B1-B8 outputs of the 74LS245 transceiver to the bus.
  8. On the 74LS245 transceiver, connect DIR to power (high) and connect OE to a new active low control line called SO on your 74LS138 decoder.
  9. Add 8 LEDs and resistors to the lower part of the 74LS245 transceiver (A1-A8) so you can see what's going on with the stack pointer.
Enabling Increment & Decrement
We've now connected everything but the Count Up and Count Down inputs. The way the 74LS193 works is that if nothing is counting, both inputs are high. If you want to increment, you keep Count Down high and pulse Count Up. To decrement, you do the opposite. We'll use a 74LS00 NAND gate for this:
  1. Take the clock from the 74LS08 AND gate and make it an input into two different NAND gates on the 74LS00.
  2. Take the output from one NAND gate and wire it to the Count Up input on the lower 74LS193 counter. Take the other output and wire it to the Count Down input.
  3. Wire up a new active high control line called SP from your ROM to the NAND gate going into Count Up.
  4. Wire up a new active high control line called SM from your ROM to the NAND gate going into Count Down.
At this point, everything should be working. Your counter should be able to reset, input a value, output a value, and increment/decrement. But the issue is it'll be writing to $0000 to $00FF in the RAM! Let's fix that.
Accessing Higher Memory Addresses
We need the stack to be in a different place in memory than our regular program. The problem is, we only have an 8-bit bus, so how do we tell the RAM we want a higher address? We'll use a special control line to do this:
  1. Wire up an active high line called SA from the 28C16 ROM to A8 on the Cypress CY7C199 RAM.
  2. Add an LED and resistor so you can see when the stack is active.
That's it! Now, whenever we need the stack we can use a combination of the control line and stack pointer to access $0100 to $01FF.
Updating the Instruction Set
All that's left now is to create some instructions that utilize the stack. We'll need to settle some conventions before we begin:
If you want to add a little personal flair to your design, you can change the convention fairly easily. Let's implement push and pop (source code):
  1. Define all your new control lines, such as #define SI 0x0700 and #define SO 0x0005.
  2. Create two new instructions: PSH (1011) and POP (1100).
  3. PSH starts the same as any other for the first two steps: MI|CO and RO|II|CE. The next step is to put the contents of the stack pointer into the address register via MI|SO|SA. Recall that SA is the special control line that tells the memory to access the $01XX bank rather than $00XX.
  4. We then take the contents of AO and write it into the RAM. We can also increment the stack pointer at this stage. All of this is done via: AO|RI|SP|SA, followed by TR.
  5. POP is pretty similar. Start off with MI|CO and RO|II|CE. We then need to take a cycle and decrement the stack pointer with SM. Like with PSH, we then set the address register with MI|SO|SA.
  6. We now just need to output the RAM into our A register with RO|AI|SA and then end the instruction with TR.
  7. Updating the assembler is easy since neither instruction has operands. For example, push is just psh -> 0x0B.
And that's it! Write some programs that take advantage of your new 256 byte stack to make sure everything works as expected.

Step 4: Add Subroutine Instructions (Image)

The last step to complete our stack is to add subroutine instructions. This allows us to write complex programs and paves the way for things like interrupt handling.
Subroutines are like a blend of push/pop instructions and a jump. Basically, when you want to call a subroutine, you save your spot in the program by pushing the program counter onto the stack, then jumping to the subroutine's location in memory. When you're done with the subroutine, you simply pop the program counter value from the stack and jump back into it.
We'll follow 6502 conventions and only save and restore the program counter for subroutines. Other CPUs may choose to save more state, but it's generally left up to the programmer to ensure they're not wiping out states in their subroutines (e.g., push the A register at the start of your subroutine if you're messing with it and restore it before you leave).
Adding an Extra Opcode Line
I've started running low on opcodes at this point. Luckily, we still have two free address lines we can use. To enable 5-bit opcodes, simply wire up the 4Q output of your upper 74LS173 register to A7 of your 28C16 ROM (this assumes your opcodes are at A3-A6).
Updating the ROM Writer
At this point, you simply need to update the Arduino writer to support 32 instructions vs. the current 16. So, for example, UCODE_TEMPLATE[16][8] becomes UCODE_TEMPLATE[32][8] and you fill in the 16 new array elements with nop. The problem is that the Arduino only has so much memory and with the way Ben's code is written to support conditional jumps, it starts to get tight.
I bet the code can be re-written to handle this, but I had a TL866II Plus EEPROM programmer handy from the 6502 build and I felt it would be easier to start using that instead. Converting to a regular C program is really simple (source code):
  1. Copy all the #define, global const arrays (don't forget to expand them from 16 to 32), and void initUCode(). Add #include and #include to the top.
  2. In your traditional int main (void) C function, after initializing with initUCode(), make two arrays: char ucode_upper[2048] and char ucode_lower[2048].
  3. Take your existing loop code that loops through all addresses: for (int address = 0; address < 2048; address++).
  4. Modify instruction to be 5-bit with int instruction = (address & 0b00011111000) >> 3;.
  5. When writing, just write to the arrays like so: ucode_lower[address] = ucode[flags][instruction][step]; and ucode_upper[address] = ucode[flags][instruction][step] >> 8;.
  6. Open a new file with FILE *f = fopen("rom_upper.hex", "wb");, write to it with fwrite(ucode_upper, sizeof(char), sizeof(ucode_upper), f); and close it with fclose(f);. Repeat this with the lower ROM too.
  7. Compile your code using gcc (you can use any C compiler), like so: gcc -Wall makerom.c -o makerom.
Running your program will spit out two binary files with the full contents of each ROM. Writing the file via the TL866II Plus requires minipro and the following command: minipro -p CAT28C16A -w rom_upper.hex.
Adding Subroutine Instructions
At this point, I cleaned up my instruction set layout a bit. I made psh and pop 1000 and 1001, respectively. I then created two new instructions: jsr and rts. These allow us to jump to a subroutine and returns from a subroutine. They're relatively simple:
  1. For jsr, the first three steps are the same as psh: MI|CO, RO|II|CE, MI|SO|SA.
  2. On the next step, instead of AO we use CO to save the program counter to the stack: CO|RI|SP|SA.
  3. We then essentially read the 2nd byte to do a jump and terminate: MI|CO, RO|J.
  4. For rts, the first four steps are the same as pop: MI|CO, RO|II|CE, SM, MI|SO|SA.
  5. On the next step, instead of AI we use J to load the program counter with the contents in stack: RO|J|SA.
  6. We're not done! If we just left this as-is, we'd jump to the 2nd byte of jsr which is not an opcode, but a memory address. All hell would break loose! We need to add a CE step to increment the program counter and then terminate.
Once you update the ROM, you should have fully functioning subroutines with 5-bit opcodes. One great way to test them is to create a recursive program to calculate something--just don't go too deep or you'll end up with a stack overflow!

Conclusion

And that's it! Another successful upgrade of your 8-bit CPU. You now have a very capable machine and toolchain. At this point I would have a bunch of fun with the software aspects. In terms of hardware, there's a number of ways to go from here:
  1. Interrupts. Interrupts are just special subroutines triggered by an external line. You can make one similar to how Ben did conditional jumps. The only added complexity is the need to load/save the flags register since an interrupt can happen at any time and you don't want to destroy the state. Given this would take more than 8 steps, you'd also need to add another line for the step counter (see below).
  2. ROM expansion. At this point, address lines on the ROM are getting tight which limits any expansion possibilities. With the new approach to ROM programming, it's trivial to switch out the 28C16 for the 28C256 that Ben uses in the 6502. These give you 4 additional address lines for flags/interrupts, opcodes, and steps.
  3. LCD output. At this point, adding a 16x2 character LCD like Ben uses in the 6502 is very possible.
  4. Segment/bank register. It's essentially a 2nd memory address register that lets you access 256-byte segments/banks of RAM using bank switching. This lets you take full advantage of the 32K of RAM in the Cypress chip.
  5. Fast increment instructions. Add these to registers by replacing 74LS173s with 74LS193s, allowing you to more quickly increment without going through the ALU. This is used to speed up loops and array operations.
submitted by MironV to beneater [link] [comments]

Using Deep Learning to Predict Earnings Outcomes

Using Deep Learning to Predict Earnings Outcomes
(Note: if you were following my earlier posts, I wrote a note at the end of this post explaining why I deleted old posts and what changed)
Edit: Can't reply to comments since my account is still flagged as new :\. Thank you everyone for your comments. Edit: Made another post answering questions here.
  • Test data is untouched during training 10:1:1 train:val:test.
  • Yes, I consider it "deep" learning from what I learned at my institution. I use LSTMs at one point in my pipeline, feel free to consider that deep or not.
  • I'll be making daily posts so that people can follow along.
  • Someone mentioned RL, yes I plan on trying that in the future :). This would require a really clever way to encode the current state parameters. Haven't thought about it too much yet.
  • Someone mentioned how companies beat earnings 61% anyway, so my model must be useless right? Well if you look at the confusion matrix you can see I balanced classes before training (with some noise). This means that the data had roughly 50/50 beat/miss and had a 58% test accuracy.
TLDR:
Not financial advice.
  • I created a deep learning algorithm trained on 2015-2019 data to predict whether a company will beat earning estimates.
  • Algorithm has an accuracy of 58%.
  • I need data and suggestions.
  • I’ll be making daily posts for upcoming earnings.
Greetings everyone,
I’m Bunga, an engineering PhD student at well known university. Like many of you, I developed an interest in trading because of the coronavirus. I lost a lot of money by being greedy and uninformed about how to actually trade options. With all the free time I have with my research slowing down because of the virus, I’ve decided to use what I’m good at (being a nerd, data analytics, and machine learning) to help me make trades.
One thing that stuck out to me was how people make bets on earnings reports. As a practitioner of machine learning, we LOVE binary events since the problem can be reduced to a simple binary classification problem. With that being said, I sought out to develop a machine learning algorithm to predict whether a company will beat earnings estimates.
I strongly suggest TO NOT USE THIS AS FINANCIAL ADVICE. Please, I could just be a random guy on the internet making things up, and I could have bugs in my code. Just follow along for some fun and don’t make any trades based off of this information 😊
Things other people have tried:
A few other projects have tried to do this to some extent [1,2,3], but some are not directly predicting the outcome of the earnings report or have a very small sample size of a few companies.
The data
This has been the most challenging part of the project. I’m using data for 4,000 common stocks.
Open, high, low, close, volume stock data is often free and easy to come by. I use stock data during the quarter (Jan 1 – Mar 31 stock data for Q1 for example) in a time series classifier. I also incorporate “background” data from several ETFs to give the algorithm a feel for how the market is doing overall (hopefully this accounts for bull/bear markets when making predictions).
I use sentiment analyses extracted from 10K/10Q documents from the previous quarter as described in [4]. This gets passed to a multilayer perceptron neural network.
Data that I’ve tried and doesn’t work to well:
Scraping 10K/10Q manually for US GAAP fields like Assets, Cash, StockholdersEquity, etc. Either I’m not very good at processing the data or most of the tables are incomplete, this doesn’t work well. However, I recently came across this amazing API [5] which will ameliorate most of these problems, and I plan on incorporating this data sometime this week.
Results
After training on about 34,000 data points, the model achieves a 58% accuracy on the test data. Class 1 is beat earnings, Class 2 is miss earnings.. Scroll to the bottom for the predictions for today’s AMC estimates.

https://preview.redd.it/fqapvx2z1tv41.png?width=875&format=png&auto=webp&s=05ea5cae25ee5689edea334f2814e1fa73aa195d
Future Directions
Things I’m going to try:
  • Financial twitter sentiment data (need data for this)
  • Data on options (ToS apparently has stuff that you can use)
  • Using data closer to the earnings report itself rather than just the data within the quarterly date
A note to the dozen people who were following me before
Thank you so much for the early feedback and following. I had a bug in my code which was replicating datapoints, causing my accuracy to be way higher in reality. I’ve modified some things to make the network only output a single value, and I’ve done a lot of bug fixing.
Predictions for 4/30/20 AMC:
A value closer to 1 means that the company will be more likely to beat earnings estimates. Closer to 0 means the company will be more likely to miss earnings estimates. (People familiar with machine learning will note that neural networks don’t actually output a probability distribution so the values don’t actually represent a confidence).
  • Tkr: AAPL NN: 0.504
  • Tkr: AMZN NN: 0.544
  • Tkr: UAL NN: 0.438
  • Tkr: GILD NN: 0.532
  • Tkr: TNDM NN: 0.488
  • Tkr: X NN: 0.511
  • Tkr: AMGN NN: 0.642
  • Tkr: WDC NN: 0.540
  • Tkr: WHR NN: 0.574
  • Tkr: SYK NN: 0.557
  • Tkr: ZEN NN: 0.580
  • Tkr: MGM NN: 0.452
  • Tkr: ILMN NN: 0.575
  • Tkr: MOH NN: 0.500
  • Tkr: FND NN: 0.542
  • Tkr: TWOU NN: 0.604
  • Tkr: OSIS NN: 0.487
  • Tkr: CXO NN: 0.470
  • Tkr: BLDR NN: 0.465
  • Tkr: CASA NN: 0.568
  • Tkr: COLM NN: 0.537
  • Tkr: COG NN: 0.547
  • Tkr: SGEN NN: 0.486
  • Tkr: FMBI NN: 0.496
  • Tkr: PSA NN: 0.547
  • Tkr: BZH NN: 0.482
  • Tkr: LOCO NN: 0.575
  • Tkr: DLA NN: 0.460
  • Tkr: SSNC NN: 0.524
  • Tkr: SWN NN: 0.476
  • Tkr: RMD NN: 0.499
  • Tkr: VKTX NN: 0.437
  • Tkr: EXPO NN: 0.526
  • Tkr: BL NN: 0.516
  • Tkr: FTV NN: 0.498
  • Tkr: ASGN NN: 0.593
  • Tkr: KNSL NN: 0.538
  • Tkr: RSG NN: 0.594
  • Tkr: EBS NN: 0.483
  • Tkr: PRAH NN: 0.598
  • Tkr: RRC NN: 0.472
  • Tkr: ICBK NN: 0.514
  • Tkr: LPLA NN: 0.597
  • Tkr: WK NN: 0.630
  • Tkr: ATUS NN: 0.530
  • Tkr: FBHS NN: 0.587
  • Tkr: SWI NN: 0.521
  • Tkr: TRUP NN: 0.570
  • Tkr: AJG NN: 0.509
  • Tkr: BAND NN: 0.618
  • Tkr: DCO NN: 0.514
  • Tkr: BRKS NN: 0.490
  • Tkr: BY NN: 0.502
  • Tkr: CUZ NN: 0.477
  • Tkr: EMN NN: 0.532
  • Tkr: VICI NN: 0.310
  • Tkr: GLPI NN: 0.371
  • Tkr: MTZ NN: 0.514
  • Tkr: SEM NN: 0.405
  • Tkr: SPSC NN: 0.465
[1] https://towardsdatascience.com/forecasting-earning-surprises-with-machine-learning-68b2f2318936
[2] https://zicklin.baruch.cuny.edu/wp-content/uploads/sites/10/2019/12/Improving-Earnings-Predictions-with-Machine-Learning-Hunt-Myers-Myers.pdf
[3] https://www.euclidean.com/better-than-human-forecasts
[4] https://cran.r-project.org/web/packages/edgaedgar.pdf.
[5] https://financialmodelingprep.com/developedocs/
submitted by xXx_Bunga_xXx to wallstreetbets [link] [comments]

[Table] Asteroid Day AMA – We’re engineers and scientists working on a mission that could, one day, help save humankind from asteroid extinction. Ask us anything!

Source
There are several people answering: Paolo Martino is PM, Marco Micheli is MM, Heli Greus is HG, Detlef Koschny is DVK, and Aidan Cowley is AC.
Questions Answers
Can we really detect any asteroids in space with accuracy and do we have any real means of destroying it? Yes, we can detect new asteroids when they are still in space. Every night dozens of new asteroids are found, including a few that can come close to the Earth.
Regarding the second part of the question, the goal would be to deflect them more than destroy them, and it is technologically possible. The Hera/DART mission currently being developed by ESA and NASA will demonstrate exactly this capability.
MM
I always wanted to ask: what is worse for life on Earth - to be hit by a single coalesced asteroid chunk, or to be hit by a multiple smaller pieces of exploded asteroid, aka disrupted rubble pile scenario? DVK: This is difficult to answer. If the rubble is small (centimetres to meters) it is better to have lots of small ones – they’d create nice bright meteors. If the rubble pieces are tens of meters it doesn’t help.
Let’s say that hypothetically, an asteroid the size of Rhode Island is coming at us, it will be a direct hit - you’ve had the resources and funding you need, your plan is fully in place, everything you’ve wanted you got. The asteroid will hit in 10 years, what do you do? DVK: I had to look up how big Rhode Island is – a bit larger than the German Bundesland ‘Saarland’. Ok – this would correspond to an object about 60 km in diameter, right? That’s quite big – we would need a lot of rocket launches, this would be extremely difficult. I would pray. The good news is that we are quite convinced that we know all objects larger than just a few kilometers which come close to our planet. None of them is on a collision course, so we are safe.
the below is a reply to the above
Why are you quite convinced that you know all object of that size? And what is your approach in finding new celestial bodies? DVK: There was a scientific study done over a few years (published in Icarus 2018, search for Granvik) where they modelled how many objects there are out there. They compared this to the observations we have with the telescopic surveys. This gives us the expected numbers shown here on our infographic: https://www.esa.int/ESA_Multimedia/Images/2018/06/Asteroid_danger_explained
There are additional studies to estimate the ‘completeness’ – and we think that we know everything above roughly a few km in size.
To find new objects, we use survey telescopes that scan the night sky every night. The two major ones are Catalina and Pan-STARRS, funded by NASA. ESA is developing the so-called Flyeye telescope to add to this effort https://www.esa.int/ESA_Multimedia/Images/2017/02/Flyeye_telescope.
the below is a reply to the above
Thanks for the answer, that's really interesting! It's also funny that the fist Flyeye deployed is in Sicily, at less than 100km from me, I really had no idea DVK: Indeed, that's cool. Maybe you can go and visit it one day.
the below is a reply to the original answer
What about Interstellar objects however, like Oumuamua? DVK: The two that we have seen - 'Oumuamua and comet Borisov - were much smaller than the Saarland (or Rhode Island ;-) - not sure about Borisov, but 'Oumuamua was a few hundred meters in size. So while they could indeed come as a complete surprise, they are so rare that I wouldn't worry.
Would the public be informed if an impending asteroid event were to happen? And, how would the extinction play out? Bunch of people crushed to death, knocked off our orbit, dust clouds forever? DVK: We do not keep things secret – all our info is at the web page http://neo.ssa.esa.int. The ‘risky’ objects are in the ‘risk page’. We also put info on really close approaches there. It would also be very difficult to keep things ‘under cover’ – there are many high-quality amateur astronomers out there that would notice.
In 2029 asteroid Apophis will fly really close to Earth, even closer than geostationary satellites. Can we use some of those satellites to observe the asteroid? Is it possible to launch very cheap cube sats to flyby Apophis in 2029? DVK: Yes an Apophis mission during the flyby in 2029 would be really nice. We even had a special session on that topic at the last Planetary Defense Conference in 2019, and indeed CubeSats were mentioned. This would be a nice university project – get me a close-up of the asteroid with the Earth in the background!
the below is a reply to the above
So you’re saying it was discussed and shelved? In the conference we just presented ideas. To make them happen needs funding - in the case of ESA the support of our member countries. But having something presented at a conference is the first step. One of the results of the conference was a statement to space agencies to consider embarking on such a mission. See here: https://www.cosmos.esa.int/documents/336356/336472/PDC_2019_Summary_Report_FINAL_FINAL.pdf/341b9451-0ce8-f338-5d68-714a0aada29b?t=1569333739470
Go to the section 'resolutions'. This is now a statement that scientists can use to present to their funding agencies, demonstrating that it's not just their own idea.
Thanks for doing this AMA! Did we know the Chelyabinsk meteor in 2013 (the one which had some great videos on social media) was coming? Ig not, how comes? Also, as a little side one, have there been any fatalities from impact events in the past 20 years? Unfortunately, the Chelyabinsk object was not seen in advance, because it came from the direction of the Sun where ground-based telescopes cannot look.
No known fatalities from impacts have happened in the past 20 years, although the Chelyabinsk event did cause many injuries, fortunately mostly minor.
MM
the below is a reply to the above
How often do impacts from that direction happen, compared to impacts from visible trajectories? In terms of fraction of the sky, the area that cannot be easily scanned from the ground is roughly a circle with a radius of 40°-50° around the current position of the Sun, corresponding to ~15% of the total sky. However, there is a slight enhancement of objects coming from that direction, therefore the fraction of objects that may be missed when heading towards us is a bit higher.
However, this applies only when detecting an asteroid in its "final plunge" towards the Earth. Larger asteroids can be spotted many orbits earlier, when they are farther away and visible in the night side of the sky. Their orbits can then be determined and their possible impacts predicted even years or decades in advance.
MM
There must be a trade-off when targeting asteroids as they get closer to Earth, is there a rule of thumb at what the best time is to reach them, in terms of launch time versus time to reach the asteroid and then distance from Earth? DVK: Take e.g. a ‘kinetic impactor’ mission, like what DART and Hera are testing. Since we only change the velocity of the asteroid slightly, we need to hit the object early enough so that the object has time to move away from it’s collision course. Finding out when it is possible to launch requires simulations done by our mission analysis team. They take the strength of the launcher into account, also the available fuel for course corrections, and other things. Normally each asteroid has its own best scenario.
Do you also look at protecting the moon from asteroids? Would an impact of a large enough scale potentially have major impacts on the earth? DVK: There are programmes that monitor the Moon and look for flashes from impacting small asteroids (or meteoroids) - https://neliota.astro.noa.g or the Spanish MIDAS project. We use the data to improve our knowledge about these objects. These programmes just look at what is happening now.
For now we would not do anything if we predicted a lunar impact. I guess this will change once we have a lunar base in place.
Why aren't there an international organisation comprised of countries focused on the asteroid defence? Imagine like the organisation with multi-billion $ budget and program of action on funding new telescopes, asteroid exploration mission, plans for detection of potentially dangerous NEA, protocols on action after the detection - all international, with heads of states discussing these problems? DVK: There are international entities in place, mandated by the UN: The International Asteroid Warning Network (http://www.iawn.net) and the Space Mission Planning Advisory Group (http://www.smpag.net). These groups advise the United Nations. That is exactly where we come up with plans and protocols on action. But: They don’t have budget – that needs to come from elsewhere. I am expecting that if we have a real threat, we would get the budget. Right now, we don’t have a multi-billion budget.
the below is a reply to someone else's answer
There is no actual risk of any sizable asteroids hitting earth in the foreseeable future. Any preparation for it would just be a waste of money. DVK: Indeed, as mentioned earlier, we do not expect a large object to hit is in the near future. We are mainly worried about those in the size range of 20 m to 40 m, which happen on average every few tens of years to hundreds of years. And where we only know a percent of them or even less.
President Obama wanted to send a crewed spacecraft to an asteroid - in your opinion is this something that should still be done in the future, would there be any usefulness in having a human being walk/float on an asteroid's surface? DVK: It would definitely be cool. I would maybe even volunteer to go. Our current missions to asteroids are all robotic, the main reason is that it is much cheaper (but still expensive) to get the same science. But humans will expand further into space, I am sure. If we want to test human exploration activities, doing this at an asteroid would be easier than landing on a planet.
this is another reply Yes, but I am slightly biased by the fact that I work at the European astronaut centre ;) There exist many similarities to what we currently do for EVA (extra vehicular activities) operations on the International Space Station versus how we would 'float' around an asteroid. Slightly biased again, but using such a mission to test exploration technologies would definitely still have value. Thanks Obama! - AC
I've heard that some asteroids contains large amounts of iron. Is there a possibility that we might have "space mines" in the far away future, if our own supply if iron runs out? Yes, this is a topic in the field known as space mining, part of what we call Space Resources. In fact, learning how we can process material we might find on asteroids or other planetary bodies is increasingly important, as it opens up the opportunities for sustainable exploration and commercialization. Its a technology we need to master, and asteroids can be a great target for testing how we can create space mines :) - AC
By how much is DART expected to deflect Didymos? Do we have any indication of the largest size of an asteroid we could potentially deflect? PM: Didymos is a binary asteroid, consisting of a main asteroid Didymos A (~700m) and a smaller asteroid Didymos B (~150m) orbiting around A with a ~12 hours period. DART is expected to impact Didymos B and change its orbital period w.r.t. Didymos A of ~1%. (8 mins)
The size of Didymos B is the most representative of a potential threat to Earth (the highest combination of probability and consequence of impacts), meaning smaller asteroids hit the Earth more often but have less severe consequences, larger asteroids can have catastrophic consequences but their probability of hitting the earth is very very low.
the below is a reply to the above
Why is there less probability of larger asteroids hitting earth? DVK: There are less large objects out there. The smaller they are, the more there are.
the below is a reply to the original answer
Is there any chance that your experiment will backfire and send the asteroid towards earth? PM: Not at all, or we would not do that :) Actually Dimorphos (the Didymos "moon") will not even leave its orbit around Didymos. It will just slightly change its speed.
I'm sure you've been asked this many times but how realistic is the plot of Armageddon? How likely is it that our fate as a species will rely on (either) Bruce Willis / deep sea oil drillers? Taking into consideration that Bruce Willis is now 65 and by the time HERA is launched he will be 69, I do not think that we can rely on him this time (although I liked the movie).
HERA will investigate what method we could use to deflect asteroid and maybe the results will show that we indeed need to call the deep sea oil drillers.
HG
the below is a reply to the above
So then would it be easier to train oil drillers to become astronauts, or to train astronauts to be oil drillers? I do not know which one would be easier since I have no training/experience of deep see oil drilling nor becoming an astronaut, but as long as the ones that would go to asteroid have the sufficient skills and training (even Bruce Willis), I would be happy.
HG
If budget was no object, which asteroid would you most like to send a mission to? Nice question! For me, I'd be looking at an asteroid we know something about, since I would be interested in using it for testing how we could extract resources from it. So for me, I would choose Itokawa (https://en.wikipedia.org/wiki/25143_Itokawa), which was visited by Hayabusa spacecraft. So we already have some solid prospecting carried out for this 'roid! - AC
this is another reply Not sure if it counts as an asteroid, but Detlef and myself would probably choose ʻOumuamua, the first discovered interstellar object.
MM
the below is a reply to the above
Do we even have the capability to catch up to something like that screaming through our solar system? That thing has to have a heck of a velocity to just barrel almost straight through like that. DVK: Correct, that would be a real challenge. We are preparing for a mission called 'Comet Interceptor' that is meant to fly to an interstellar object or at least a fresh comet - but it will not catch up with it, it will only perform a short flyby.
https://www.esa.int/Science_Exploration/Space_Science/ESA_s_new_mission_to_intercept_a_comet
After proving to be able to land on one, could an asteroid serve as a viable means to transport goods and or humans throughout the solar system when the orbit of said asteroid proves beneficial. While it is probably quite problematic to land the payload, it could save fuel or am I mistaken? Neat idea! Wonder if anyone has done the maths on the amount of fuel you would need/save vs certain targets. - AC
PM: To further complement, the saving is quite marginal indeed because in order to land (softly) on the asteroid you actually need to get into the very same orbit of that asteroid . At that point your orbit remains the same whether you are on the asteroid or not..
can the current anti-ballistic missiles systems intercept a terminal phase earth strike asteroid? or it is better to know beforehand and launch an impacting vehicle into space? DVK: While I do see presentations on nuclear explosions to deflect asteroids at our professional meetings, I have not seen anybody yet studying how we could use existing missile systems. So it's hard to judge whether existing missiles would do the job. But in general, it is better to know as early as possible about a possible impact and deflect it as early as possible. This will minimize the needed effort.
How much are we prepared against asteroid impacts at this moment? DVK: 42… :-) Seriously – I am not sure how to quantify ‘preparedness’. We have international working groups in place, mentioned earlier (search for IAWN, SMPAG). We have a Planetary Defence Office at ESA, a Planetary Defense Office at NASA (who spots the difference?), search the sky for asteroids, build space missions… Still we could be doing more. More telescopes to find the object, a space-based telescope to discover those that come from the direction of the Sun. Different test missions would be useful, … So there is always more we could do.
Have you got any data on the NEO coverage? Is there estimations on the percentage of NEOs we have detected and are tracking? How can we improve the coverage? How many times have asteroids been able to enter earths atmosphere without being detected beforehand? Here’s our recently updated infographics with the fraction of undiscovered NEOs for each size range: https://www.esa.int/ESA_Multimedia/Images/2018/06/Asteroid_danger_explained
As expected, we are now nearly complete for the large ones, while many of the smaller ones are still unknown.
In order to improve coverage, we need both to continue the current approach, centered on ground-based telescopes, and probably also launch dedicated telescopes to space, to look at the fraction of the sky that cannot be easily observed from the ground (e.g., towards the Sun).
Regarding the last part of your question, small asteroids enter the Earth atmosphere very often (the infographics above gives you some numbers), while larger ones are much rarer.
In the recent past, the largest one to enter our atmosphere was about 20 meters in diameter, and it caused the Chelyabinsk event in 2013. It could not be detected in advance because it came from the direction of the Sun.
We have however detected a few small ones before impact. The first happened in 2008, when a ~4-meter asteroid was found to be on a collision course less than a day before impact, it was predicted to fall in Northern Sudan, and then actually observed falling precisely where (and when) expected.
MM
this is another reply >After
DVK: And to add what MM said - Check out http://neo.ssa.esa.int. There is a ‘discovery statistics’ section which provides some of the info you asked about. NASA is providing similar information here https://cneos.jpl.nasa.gov/stats/. To see the sky which is currently covered by the survey telescopes, you need to service of the Minor Planet Center which we all work together with: http://www.minorplanetcenter.org, ‘observers’, ‘sky coverage’. That is a tool we use to plan where we look with our telescopes, so it is a more technical page.
Are there any automatic systems for checking large numbers of asteroids orbits, to see if the asteroid's orbit is coming dangerously close to Earth, or is it done by people individually for every asteroid? I ask it because LSST Rubin is coming online soon and you know it will discover a lot of new asteroids. Yes, such systems exist, and monitor all known and newly discovered asteroids in order to predict possible future impacts.
The end result of the process is what we call "risk list": http://neo.ssa.esa.int/risk-page
It is automatically updated every day once new observational data is processed.
MM
What are your favourite sci-fi series? DVK: My favorites are ‘The Expanse’, I also liked watching ‘Salvation’. For the first one I even got my family to give me a new subscription to a known internet streaming service so that I can see the latest episodes. I also loved ‘The Jetsons’ and ‘The Flintstones’ as a kid. Not sure the last one counts as sci-fi though. My long-time favorite was ‘Dark Star’.
this is another reply Big fan of The Expanse at the moment. Nice, hard sci-fi that has a good impression of being grounded in reality - AC
this is another reply When I was a kid I liked The Jetsons, when growing up Star Trek, Star wars and I also used to watch with my sister the 'V'.
HG
When determining the potential threat of a NEA, is the mass of an object a bigger factor or size? I'm asking because I'm curious if a small but massive object (say, with the density of Psyche) could survive atmospheric entry better than a comparatively larger but less massive object. The mass is indeed what really matters, since it’s directly related with the impact energy.
And as you said composition also matters, a metal object would survive atmospheric entry better, not just because it’s heavier, but also because of its internal strength.
MM
What are your thoughts on asteroid mining as portrayed in sci-fi movies? Is it feasible? If so would governments or private space programs be the first to do so?What type of minerals can be found on asteroids that would merit the costs of extraction? Certainly there is valuable stuff you can find on asteroids. For example, the likely easiest material you can harvest from an asteroid would be volatiles such as H2O. Then you have industrial metals, things like Iron, Nickel, and Platinum group metals. Going further, you can break apart many of the oxide minerals you would find to get oxygen (getting you closer to producing rocket fuel in-situ!). Its feasible, but still needs alot of testing both here on Earth and eventually needs to be tested on a target. It may be that governments, via agencies like ESA or NASA, may do it first, to prove the principles somewhat, but I know many commercial entities are also aggresively working towards space mining. To show you that its definitely possible, I'd like to plug the work of colleagues who have processed lunar regolith (which is similar to what you may find on asteroids) to extract both oxygen and metals. Check it out here: http://www.esa.int/ESA_Multimedia/Images/2019/10/Oxygen_and_metal_from_lunar_regolith
AC
Will 2020's climax be a really big rock? DVK: Let's hope not...
Considering NASA, ESA, IAU etc. is working hard to track Earth-grazing asteroids, how come the Chelyabinsk object that airburst over Russia in 2013 came as a total surprise? The Chelyabinsk object came from the direction of the Sun, where unfortunately ground-based telescopes cannot look at. Therefore, it would not have been possible to discover it in advance with current telescopes. Dedicated space telescopes are needed to detect objects coming from this direction in advance.
MM
the below is a reply to the above
Is this to say that it was within specific solid angles for the entire time that we could have observed it given its size and speed? Yes, precisely that. We got unlucky in this case.
MM
Have any of you read Lucifer's Hammer by Larry Niven? In your opinion, how realistic is his depiction of an asteroid strike on Earth? DVK: I have – but really long ago, so I don’t remember the details. But I do remember that I really liked the book, and I remember I always wanted to have a Hot Fudge Sundae when reading it.
I was thinking about the asteroid threat as a teen and came up with this ideas (Hint: they are not equally serious, the level of craziness goes up real quick). Could you please comment on their feasibility? 1. Attaching a rocket engine to an asteroid to make it gradually change trajectory, do that long in advance and it will miss Earth by thousands of km 2. Transporting acid onto asteroid (which are mainly metal), attaching a dome-shaped reaction chamber to it, using heat and pressure to then carry out the chemical reaction to disintegrate asteroids 3. This one is even more terrible than a previous one and totally Dan Brown inspired — transporting antimatter on asteroid, impacting and causing annihilation. Thank you for this AMA and your time! DVK: Well the first one is not so crazy, I have seen it presented... the difficulty is that all asteroids are rotating in one way or another. So if you continuously fire the engine it would not really help. You'd need to switch the engine on and off. Very complex. And landing on an asteroid is challenging too. Just using the 'kinetic impactor' which we will test with DART/Hera (described elsewhere in this chat) is simpler. Another seriously proposed concept is to put a spacecraft next to an asteroid and use an ion engine (like we have on our Mercury mission BepiColombo) to 'push' the asteroid away.
As for 2 and 3 I think I will not live to see that happening ;-)
What is the process to determine the orbit of a newly discovered asteroid? The process is mathematically quite complex, but here's a short summary.
Everything starts with observations, in particular with measurements of the position of an asteroid in the sky, what we call "astrometry". Discovery telescopes extract this information from their discovery images, and make it available to everybody.
These datapoints are then used to calculate possible trajectories ("orbits") that pass through them. At first, with very few points, many orbits will be possible.
Using these orbits we can extrapolate where the asteroid will be located during the following nights, use a telescope to observe that part of the sky, and locate the object again.
From these new observations we can extract new "astrometry", add it to the orbit determination, and see that now only some of the possible orbits will be compatible with the new data. As a result, we now know the trajectory better than before, because a few of the possible orbits are not confirmed by the new data.
The cycle can then continue, with new predictions, new observations, and a more accurate determination of the object's orbit, until it can be determined with an extremely high level of accuracy.
MM
What are some asteroids that are on your "watchlist"? We have exactly that list on our web portal: http://neo.ssa.esa.int/risk-page
It's called "risk list", and it includes all known asteroids for which we cannot exclude a possible impact over the next century. It is updated every day to include newly discovered asteroids, and remove those that have been excluded as possible impactors thanks to new observations.
MM
the below is a reply to the above
That's quite a list!! Do you guys ever feel stressed or afraid when you have to add another dangerous candidate (and by dangerous I mean those above 200m) is added to this Risk List? Yes, when new dangerous ones are added it's important that we immediately do our best to gather more data on them, observing them with telescopes in order to get the information we need to improve our knowledge of their orbit.
And then the satisfaction of getting the data needed to remove one from the list is even greater!
MM
What inspired you to go into this field of study? I was fascinated by astronomy in general since I was a kid, but the actual "trigger" that sparked my interest in NEOs was a wonderful summer course on asteroids organized by a local amateur astronomers association. I immediately decided that I would do my best to turn this passion into my job, and I'm so happy to have been able to make that dream come true.
MM
this is another reply DVK: I started observing meteors when I was 14, just by going outside and looking at the night sky. Since then, small bodies in the solar system were always my passion.
As a layperson, I still think using nuclear weapons against asteroids is the coolest method despite better methods generally being available. Do you still consider the nuclear option the cool option, or has your expertise in the field combined with the real-life impracticalities made it into a laughable/silly/cliche option? DVK: We indeed still study the nuclear option. There are legal aspects though, the ‘outer space treaty’ forbids nuclear explosions in space. But for a large object or one we discover very late it could be useful. That’s why we have to focus on discovering all the objects out there as early as possible – then we have time enough to use more conventional deflection methods, like the kinetic impactor (the DART/Hera scenario).
It seems like doing this well would require international cooperation, particularly with Russia. Have you ever reached out to Russia in your work? Do you have a counterpart organization there that has a similar mission? DVK: Indeed international cooperation is important - asteroids don't know about our borders! We work with a Russian team to perform follow-up observations of recently discovered NEOs. Russia is also involved in the UN-endorsed working groups that we have, IAWN and SMPAG (explained in another answer).
how much can experts tell from a video of a fireball or meteor? Can you work out what it's made of and where it came from? https://www.reddit.com/space/comments/hdf3xe/footage_of_a_meteor_at_barrow_island_australia/?utm_source=share&utm_medium=web2x If multiple videos or pictures, taken from different locations, are available, then it's possible to reconstruct the trajectory, and extrapolate where the object came from.
Regarding the composition, it's a bit more difficult if nothing survives to the ground, but some information can be obtained indirectly from the fireball's color, or its fragmentation behavior. If a spectral analysis of the light can be made, it's then possible to infer the chemical composition in much greater detail.
MM
I've always wanted to know what the best meteorite buying site is and what their average price is?? DVK: Serious dealers will be registered with the 'International Meteorite Collectors Association (IMCA)' - https://www.imca.cc/. They should provide a 'certificate of authenticity' where it says that they are member there. If you are in doubt, you can contact the association and check. Normally there are rough prices for different meteorite types per gram. Rare meteorites will of course be much more expensive than more common ones. Check the IMCA web page to find a dealer close to you.
Just read through Aidans link to the basaltic rock being used as a printing material for lunar habitation. There is a company called Roxul that does stone woven insulation that may be able to shed some light on the research they have done to minimize their similarity to asbestos as potentially carcinogenic materials deemed safe for use in commercial and residential applications. As the interior surfaces will essentially be 3D printed lunar regolith what are the current plans to coat or dampen the affinity for the structure to essentially be death traps for respiratory illness? At least initially, many of these 3d printed regolith structures would not be facing into pressurised sections, but would rather be elements placed outside and around our pressure vessels. Such structures would be things like radiation shields, landing pads or roadways, etc. In the future, if we move towards forming hermetically sealed structures, then your point is a good one. Looking into terrestrial solutions to this problem would be a great start! - AC
What kind of career path does it take to work in the asteroid hunting field? It's probably different for each of us, but here's a short summary of my own path.
I became interested in asteroids, and near-Earth objects in particular, thanks to a wonderful summer course organized by a local amateur astronomers association. Amateur astronomers play a great role in introducing people, and young kids in particular, to these topics.
Then I took physics as my undergrad degree (in Italy), followed by a Ph.D. in astronomy in the US (Hawaii in particular, a great place for astronomers thanks to the exceptional telescopes hosted there).
After finishing the Ph.D. I started my current job at ESA's NEO Coordination Centre, which allowed me to realize my dream of working in this field.
MM
this is another reply DVK: Almost all of us have a Master's degree either in aerospace engineering, mathematics, physics/astronomy/planetary science, or computer science. Some of us - as MM - have a Ph.D. too. But that's not really a requirement. This is true for our team at ESA, but also for other teams in other countries.
What is the likelihood of an asteroid hitting the Earth In the next 200 years? It depends on the size, large ones are rare, while small ones are much more common. You can check this infographics to get the numbers for each size class: https://www.esa.int/ESA_Multimedia/Images/2018/06/Asteroid_danger_explained
MM
Have you played the Earth Defence Force games and if you have, which one is your favourite? No I have not played the Earth Defence Force games, but I just looked it up and I think I would liked it. Which one would you recommend?
HG
How close is too close to earth? Space is a SUPER vast void so is 1,000,000 miles close, 10,000,000? And if an asteroid is big enough can it throw earth off its orbit? DVK: Too close for my taste is when we compute an impact probability > 0 for the object. That means the flyby distance is zero :-) Those are the objects on our risk page http://neo.ssa.esa.int/risk-page.
If an object can alter the orbit of another one, we would call it planet. So unless we have a rogue planet coming from another solar system (verrry unlikely) we are safe from that.
How can I join you when I'm older? DVK: Somebody was asking about our career paths... Study aerospace engineering or math or physics or computer science, get a Masters. Possibly a Ph.D. Then apply for my position when I retire. Check here for how to apply at ESA: https://www.esa.int/About_Us/Careers_at_ESA/Frequently_asked_questions2#HR1
How much is too much? DVK: 42 again
Are you aware of any asteroids that are theoretically within our reach, or will be within our reach at some point, that are carrying a large quantity of shungite? If you're not aware, shungite is like a 2 billion year old like, rock stone that protects against frequencies and unwanted frequencies that may be traveling in the air. I bought a whole bunch of the stuff. Put them around the la casa. Little pyramids, stuff like that. DVK: If I remember my geology properly, Shungite forms in water sedimental deposits. This requires liquid water, i.e. a larger planet. So I don't think there is a high chance to see that on asteroids.
submitted by 500scnds to tabled [link] [comments]

ABI Breaks: Not just about rebuilding

Related reading:
What is ABI, and What Should WG21 Do About It?
The Day The Standard Library Died

Q: What does the C++ committee need to do to fix large swaths of ABI problems?

A: Absolutely nothing

On current implementations, std::unique_ptr's calling convention causes some inefficiencies compared to raw pointers. The standard doesn't dictate the calling convention of std::unique_ptr, so implementers could change that if they chose to.
On current implementations, std::hash will return the same result for the same input, even across program invocations. This makes it vulnerable to cache poisoning attacks. Nothing in the standard requires that different instances of a program produce the same output. An implementation could choose to have a global variable with a per-program-instance seed in it, and have std::hash mix that in.
On current implementations, std::regex is extremely slow. Allegedly, this could be improved substantially without changing the API of std::regex, though most implementations don't change std::regex due to ABI concerns. An implementation could change if it wanted to though. However, very few people have waded into the guts of std::regex and provided a faster implementation, ABI breaking or otherwise. Declaring an ABI break won't make such an implementation appear.
None of these issues are things that the C++ committee claims to have any control over. They are dictated by vendors and by the customers of the vendors. A new vendor could come along and have a better implementation. For customers that prioritize QoI over ABI stability, they could switch and recompile everything.
Even better, the most common standard library implementations are all open source now. You could fork the standard library, tweak the mangling, and be your own vendor. You can then be in control of your own destiny ABI, and without taking the large up-front cost of reinventing the parts of the standard library that you are satisfied with. libc++ has a LIBCXX_ABI_UNSTABLE configuration flag, so that you always get the latest and greatest optimizations. libstdc++ has a --enable-symvers=gnu-versioned-namespace configuration flag that is ABI unstable, and it goes a long way towards allowing multiple libstdc++ instances coexist simultaneously. Currently the libc++ and libstdc++ unstable ABI branches don't have many new optimizations because there aren't many contributions and few people use it. I will choose to be optimistic, and assume that they are unused because people were not aware of them.
If your only concern is ABI, and not API, then vendors and developers can fix this on their own without negatively affecting code portability or conformance. If the QoI gains from an ABI break are worth a few days / weeks to you, then that option is available today.

Q: What aspects of ABI makes things difficult for the C++ committee.

A: API and semantic changes that would require changes to the ABI are difficult for the C++ committee to deal with.

There are a lot of things that you can do to a type or function to make it ABI incompatible with the old type. The C++ committee is reluctant to make these kinds of changes, as they have a substantially higher cost. Changing the layout of a type, adding virtual methods to an existing class, and changing template parameters are the most common operations that run afoul of ABI.

Q: Are ABI changes difficult for toolchain vendors to deal with?

A1: For major vendors, they difficulty varies depending on the magnitude of the break.

Since GCC 5 dealt with the std::string ABI break, GCC has broken the language ABI 6 other times, and most people didn't even notice. There were several library ABI breaks (notably return type changes for std::complex and associative container erase) that went smoothly as well. Quite a few people noticed the GCC 5 std::string ABI changes though.
In some cases, there are compiler heroics that can be done to mitigate an library ABI change. You will get varying responses as to whether this is a worthwhile thing to do, depending on the vendor and the change.
If the language ABI changes in a large way, then it can cause substantially more pain. GCC had a major language ABI change in GCC 3.4, and that rippled out into the library. Dealing with libstdc++.so.5 and libstdc++.so.6 was a major hassle for many people, myself included.

A2: For smaller vendors, the difficulty of an ABI break depends on their customer base.

These days, it's easier than ever to be your own toolchain vendor. That makes you a vendor with excellent insight into how difficult an ABI change would be.

Q: Why don't you just rebuild after an ABI change?

A1: Are you rebuilding the standard library too?

Many people will recommend not passing standard library types around, and not throwing exceptions across shared library boundaries. They often forget that at least one very commonly used shared library does exactly that... your C++ standard library.
On many platforms, there is usually a system C++ standard library. If you want to use that, then you need to deal with standard library types and exceptions going across shared library boundaries. If OS version N+1 breaks ABI in the system C++ standard library, the program you shipped and tested with for OS version N will not work on the upgraded OS until you rebuild.

A2: Sometimes, rebuilding isn't enough

Suppose your company distributes pre-built programs to customers, and this program supports plugins (e.g. Wireshark dissector plugins). If the plugin ABI changes, in the pre-built program, then all of the plugins need to rebuild. The customer that upgrades the program is unlikely to be the one that does the rebuilding, but they will be responsible for upgrading all the plugins as well. The customer cannot effectively upgrade until the entire ecosystem has responded to the ABI break. At best, that takes a lot of time. More likely, some parts of the ecosystem have become unresponsive, and won't ever upgrade.
This also requires upgrading large swaths of a system at once. In certain industries, it is very difficult to convince a customer to upgrade anything at all, and upgrading an entire system would be right out.
Imagine breaking ABI on a system library on a phone. Just getting all of the apps that your company owns upgraded and deployed at the same time as the system library would be a herculean effort, much less getting all the third party apps to upgrade as well.
There are things you can do to mitigate these problems, at least for library and C++ language breaks on Windows, but it's hard to mitigate this if you are relying on a system C++ standard library. Also, these mitigations usually involve writing more error prone code that is less expressive and less efficient than if you just passed around C++ standard library types.

A3: Sometimes you can't rebuild everything.

Sometimes, business models revolve around selling pre-built binaries to other people. It is difficult to coordinate ABI changes across these businesses.
Sometimes, there is a pre-built binary, and the company that provided that binary is no longer able to provide updates, possibly because the company no longer exists.
Sometimes, there is a pre-built binary that is a shared dependency among many companies (e.g. OpenSSL). Breaking ABI on an upgrade of such a binary will cause substantial issues.

Q: What tools do we have for managing ABI changes?

A: Several, but they all have substantial trade-offs.

The most direct tool is to just make a new thing and leave the old one alone. Don't like std::unordered_map? Then make std::open_addressed_hash_map. This technique allows new and old worlds to intermix, but the translations between new and old must be done explicitly. You don't get to just rebuild your program and get the benefits of the new type. Naming the new things becomes increasingly difficult, at least if you decide to not do the "lazy" thing and just name the new class std::unordered_map2 or std2::unordered_map. Personally, I'm fine with slapping a version number on most of these classes, as it gives a strong clue to users that we may need to revise this thing again in the future, and it would mean we might get an incrementally better hash map without needing to wait for hashing research to cease.
inline namespaces are another tool that can be used, but they solve far fewer ABI problems than many think. Upgrading a type like std::string or std::unordered_map via inline namespaces generally wouldn't work, as user types holding the upgraded types would also change, breaking those ABIs. inline namespaces can probably help add / change parameters to functions, and may even extend to updating empty callable objects, but neither of those are issues that have caused many problems in the C++ committee in the past.
Adding a layer of indirection, similar to COM, does a lot to address stability and extensibility, at a large cost to performance. However, one area that the C++ committee hasn't explored much in the past is to look at the places where we already have a layer of indirection, and using COM-like techniques to allow us to add methods in the future. Right now, I don't have a good understanding of the performance trade-offs between the different plug-in / indirect call techniques that we could use for things like std::pmr::memory_resource and std::error_category.

Q: What can I do if I don't want to pay the costs for ABI stability?

A: Be your own toolchain vendor, using the existing open-source libraries and tools.

If you are able to rebuild all your source, then you can point all your builds at a custom standard library, and turn on (or even make your own) ABI breaking changes. You now have a competitive advantage, and you didn't even need to amend an international treaty (the C++ standard) to make it happen! If your changes were only ABI breaking and not API breaking, then you haven't even given up on code portability.
Note that libc++ didn't need to get libstdc++'s permission in order to coexist on Linux. You can have multiple standard libraries at the same time, though there are some technical challenges created when you do that.

Q: What can I do if I want to change the standard in a way that is ABI breaking?

A1: Consider doing things in a non-breaking way.

A2: Talk to compiler vendors and the ABI Review Group (ARG) to see if there is a way to mitigate the ABI break.

A3: Demonstrate that your change is so valuable that the benefit outweighs the cost, or that the cost isn't necessarily that high.

Assorted points to make before people in the comments get them wrong

submitted by ben_craig to cpp [link] [comments]

Fairlearn - A Python package to assess AI system's fairness

In 2015, Claire Cain Miller wrote on The New York Times that there was a widespread belief that software and algorithms that rely on data were objective. Five years later, we know for sure that AI is not free of human influence. Data is created, stored, and processed by people, machine learning algorithms are written and maintained by people, and AI applications simply reflect people’s attitudes and behavior.
Data scientists know that no longer accuracy is the only concern when developing machine learning models, fairness must be considered as well. In order to make sure that machine learning solutions are fair and the value of their predictions easy to understand and explain, it is essential to build tools that developers and data scientists can use to assess their AI system’s fairness and mitigate any observed unfairness issues.
This article will focus on AI fairness, by explaining the following aspects and tools:
  1. Fairlearn: a tool to assess AI system’s fairness and mitigate any observed unfairness issues
  2. How to use Fairlearn in Azure Machine Learning
  3. What we mean by fairness
  4. Fairlearn algorithms
  5. Fairlearn dashboard
  6. Comparing multiple models
  7. Additional resources and how to contribute

1. Fairlearn: a tool to assess AI system’s fairness and mitigate any observed unfairness issues

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system’s fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as a Jupyter widget for model assessment. The Fairlearn package has two components:
There is also a collection of Jupyter notebooks and an a detailed API guide, that you can check to learn how to leverage Fairlearn for your own data science scenario.

2. How to use Fairlearn in Azure Machine Learning

The Fairlearn package can be installed via:
pip install fairlearn
or optionally with a full feature set by adding extras, e.g. pip install fairlearn[customplots], or you can clone the repository locally via:
git clone [email protected]:fairlearn/fairlearn.git
In Azure Machine Learning, there are a few options to use Jupyter notebooks for your experiments:

a) Get Fairlearn samples on your notebook server

If you’d like to bring your own notebook server for local development, follow these steps:
  1. Use the instructions at Azure Machine Learning SDK to install the Azure Machine Learning SDK for Python
  2. Create an Azure Machine Learning workspace.
  3. Write a configuration file
  4. Clone the GitHub repository.
git clone [email protected]:fairlearn/fairlearn.git
  1. Start the notebook server from your cloned directory.
jupyter notebook
For more information, see Install the Azure Machine Learning SDK for Python.
b) Get Fairlearn samples on DSVM
The Data Science Virtual Machine (DSVM) is a customized VM image built specifically for doing data science. If you create a DSVM, the SDK and notebook server are installed and configured for you. However, you’ll still need to create a workspace and clone the sample repository.
  1. Create an Azure Machine Learning workspace.
  2. Clone the GitHub repository.
git clone [email protected]:fairlearn/fairlearn.git
  1. Add a workspace configuration file to the cloned directory using either of these methods:
  1. Start the notebook server from your cloned directory:
jupyter notebook

3. What we mean by fairness

Fighting against unfairness and discrimination has a long history in philosophy and psychology, and recently in machine learning. However, in order to be able to achieve fairness, we should first define the notion of it. An AI system can behave unfairly for a variety of reasons and many different fairness explanations have been used in literature, making this definition even more challenging. In general, fairness definitions fall under three different categories as follows:
In Fairlearn, we define whether an AI system is behaving unfairly in terms of its impact on people – i.e., in terms of harms. We focus on two kinds of harms:
We follow the approach known as group fairness, which asks: Which groups of individuals are at risk of experiencing harm? The relevant groups need to be specified by the data scientist and are application-specific. Group fairness is formalized by a set of constraints, which require that some aspect (or aspects) of the AI system’s behavior be comparable across the groups. The Fairlearn package enables the assessment and mitigation of unfairness under several common definitions.

4. Fairlearn algorithms

Fairlearn contains the following algorithms for mitigating unfairness in binary classification and regression:
https://preview.redd.it/5fzg767oh5051.png?width=898&format=png&auto=webp&s=731eab09b421c2dd3233ea9e184df136bf066739

5. Fairlearn dashboard

Fairlearn dashboard is a Jupyter notebook widget for assessing how a model’s predictions impact different groups (e.g., different ethnicities), and also for comparing multiple models along different fairness and accuracy metrics.
To assess a single model’s fairness and accuracy, the dashboard widget can be launched within a Jupyter notebook as follows:
from fairlearn.widget import FairlearnDashboard
# A_test containts your sensitive features (e.g., age, binary gender)
# sensitive_feature_names containts your sensitive feature names
# y_true contains ground truth labels
# y_pred contains prediction labels
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['BinaryGender', 'Age'],
y_true=Y_test.tolist(),
y_pred=[y_pred.tolist()])
After the launch, the widget walks the user through the assessment set-up, where the user is asked to select:
  1. the sensitive feature of interest (e.g., binary gender or age)
  2. the accuracy metric (e.g., model precision) along which to evaluate the overall model performance as well as any disparities across groups.
These selections are then used to obtain the visualization of the model’s impact on the subgroups (e.g., model precision for females and model precision for males). The following figures illustrate the set-up steps, where binary gender is selected as a sensitive feature and the accuracy rate is selected as the accuracy metric:
After the set-up, the dashboard presents the model assessment in two panels, as summarized in the table, and visualized in the screenshot below:
https://preview.redd.it/juxlrmrkh5051.png?width=900&format=png&auto=webp&s=d92da30619369f5ab5109834ff7ff4ec3ad7f33d

6. Comparing multiple models

An additional feature that this dashboard offers is the comparison of multiple models, such as the models produced by different learning algorithms and different mitigation approaches, including:
As before, the user is first asked to select the sensitive feature and the accuracy metric. The model comparison view then depicts the accuracy and disparity of all the provided models in a scatter plot. This allows the user to examine trade-offs between algorithm accuracy and fairness. Moreover, each of the dots can be clicked to open the assessment of the corresponding model.
The figure below shows the model comparison view with binary gender selected as a sensitive feature and accuracy rate selected as the accuracy metric.

7. Additional resources and how to contribute

For references and additional resources, please refer to:
To contribute please check this contributing guide.
submitted by frlazzeri to deeplearning [link] [comments]

Fairlearn - A Python package to assess AI system's fairness

Fairlearn - A Python package to assess AI system's fairness
In 2015, Claire Cain Miller wrote on The New York Times that there was a widespread belief that software and algorithms that rely on data were objective. Five years later, we know for sure that AI is not free of human influence. Data is created, stored, and processed by people, machine learning algorithms are written and maintained by people, and AI applications simply reflect people’s attitudes and behavior.
Data scientists know that no longer accuracy is the only concern when developing machine learning models, fairness must be considered as well. In order to make sure that machine learning solutions are fair and the value of their predictions easy to understand and explain, it is essential to build tools that developers and data scientists can use to assess their AI system’s fairness and mitigate any observed unfairness issues.
This article will focus on AI fairness, by explaining the following aspects and tools:
  1. Fairlearn: a tool to assess AI system’s fairness and mitigate any observed unfairness issues
  2. How to use Fairlearn in Azure Machine Learning
  3. What we mean by fairness
  4. Fairlearn algorithms
  5. Fairlearn dashboard
  6. Comparing multiple models
  7. Additional resources and how to contribute

1. Fairlearn: a tool to assess AI system’s fairness and mitigate any observed unfairness issues

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system’s fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as a Jupyter widget for model assessment. The Fairlearn package has two components:
  • A dashboard for assessing which groups are negatively impacted by a model, and for comparing multiple models in terms of various fairness and accuracy metrics.
  • Algorithms for mitigating unfairness in a variety of AI tasks and along a variety of fairness definitions.
There is also a collection of Jupyter notebooks and an a detailed API guide, that you can check to learn how to leverage Fairlearn for your own data science scenario.

2. How to use Fairlearn in Azure Machine Learning

The Fairlearn package can be installed via:
pip install fairlearn
or optionally with a full feature set by adding extras, e.g. pip install fairlearn[customplots], or you can clone the repository locally via:
git clone [email protected]:fairlearn/fairlearn.git
In Azure Machine Learning, there are a few options to use Jupyter notebooks for your experiments:

a) Get Fairlearn samples on your notebook server

If you’d like to bring your own notebook server for local development, follow these steps:
  1. Use the instructions at Azure Machine Learning SDK to install the Azure Machine Learning SDK for Python
  2. Create an Azure Machine Learning workspace.
  3. Write a configuration file
  4. Clone the GitHub repository.
git clone [email protected]:fairlearn/fairlearn.git
  1. Start the notebook server from your cloned directory.
jupyter notebook
For more information, see Install the Azure Machine Learning SDK for Python.
b) Get Fairlearn samples on DSVM
The Data Science Virtual Machine (DSVM) is a customized VM image built specifically for doing data science. If you create a DSVM, the SDK and notebook server are installed and configured for you. However, you’ll still need to create a workspace and clone the sample repository.
  1. Create an Azure Machine Learning workspace.
  2. Clone the GitHub repository.
git clone [email protected]:fairlearn/fairlearn.git
  1. Add a workspace configuration file to the cloned directory using either of these methods:
  • In the Azure portal, select Download config.json from the Overview section of your workspace.
  • Create a new workspace using code in the configuration.ipynb notebook in your cloned directory
  1. Start the notebook server from your cloned directory:
jupyter notebook

3. What we mean by fairness

Fighting against unfairness and discrimination has a long history in philosophy and psychology, and recently in machine learning. However, in order to be able to achieve fairness, we should first define the notion of it. An AI system can behave unfairly for a variety of reasons and many different fairness explanations have been used in literature, making this definition even more challenging. In general, fairness definitions fall under three different categories as follows:
  • Individual Fairness – Give similar predictions to similar individuals.
  • Group Fairness – Treat different groups equally.
  • Subgroup Fairness – Subgroup fairness intends to obtain the best properties of the group and individual notions of fairness.
In Fairlearn, we define whether an AI system is behaving unfairly in terms of its impact on people – i.e., in terms of harms. We focus on two kinds of harms:
  • Allocation harms. These harms can occur when AI systems extend or withhold opportunities, resources, or information. Some of the key applications are in hiring, school admissions, and lending.
  • Quality-of-service harms. Quality of service refers to whether a system works as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld.
We follow the approach known as group fairness, which asks: Which groups of individuals are at risk of experiencing harm? The relevant groups need to be specified by the data scientist and are application-specific. Group fairness is formalized by a set of constraints, which require that some aspect (or aspects) of the AI system’s behavior be comparable across the groups. The Fairlearn package enables the assessment and mitigation of unfairness under several common definitions.

4. Fairlearn algorithms

Fairlearn contains the following algorithms for mitigating unfairness in binary classification and regression:
https://preview.redd.it/2inmvd6g75051.png?width=899&format=png&auto=webp&s=3386410974a9e3640ef8ef8a409a2f19f989330a

5. Fairlearn dashboard

Fairlearn dashboard is a Jupyter notebook widget for assessing how a model’s predictions impact different groups (e.g., different ethnicities), and also for comparing multiple models along different fairness and accuracy metrics.
To assess a single model’s fairness and accuracy, the dashboard widget can be launched within a Jupyter notebook as follows:
from fairlearn.widget import FairlearnDashboard
# A_test containts your sensitive features (e.g., age, binary gender)
# sensitive_feature_names containts your sensitive feature names
# y_true contains ground truth labels
# y_pred contains prediction labels
FairlearnDashboard(sensitive_features=A_test,
sensitive_feature_names=['BinaryGender', 'Age'],
y_true=Y_test.tolist(),
y_pred=[y_pred.tolist()])
After the launch, the widget walks the user through the assessment set-up, where the user is asked to select:
  1. the sensitive feature of interest (e.g., binary gender or age)
  2. the accuracy metric (e.g., model precision) along which to evaluate the overall model performance as well as any disparities across groups.
These selections are then used to obtain the visualization of the model’s impact on the subgroups (e.g., model precision for females and model precision for males). The following figures illustrate the set-up steps, where binary gender is selected as a sensitive feature and the accuracy rate is selected as the accuracy metric:
After the set-up, the dashboard presents the model assessment in two panels, as summarized in the table, and visualized in the screenshot below:

https://preview.redd.it/enskhh7i75051.png?width=900&format=png&auto=webp&s=db98cb058029655757df1946e42bca4831170451

6. Comparing multiple models

An additional feature that this dashboard offers is the comparison of multiple models, such as the models produced by different learning algorithms and different mitigation approaches, including:
  • fairlearn.reductions.GridSearch
  • fairlearn.reductions.ExponentiatedGradient
  • fairlearn.postprocessing.ThresholdOptimizer
As before, the user is first asked to select the sensitive feature and the accuracy metric. The model comparison view then depicts the accuracy and disparity of all the provided models in a scatter plot. This allows the user to examine trade-offs between algorithm accuracy and fairness. Moreover, each of the dots can be clicked to open the assessment of the corresponding model.
The figure below shows the model comparison view with binary gender selected as a sensitive feature and accuracy rate selected as the accuracy metric.

7. Additional resources and how to contribute

For references and additional resources, please refer to:
To contribute please check this contributing guide.
submitted by frlazzeri to learnmachinelearning [link] [comments]

Freestanding in Prague

Freestanding in Prague

The C++ standards committee met in Prague, Czech Republic between Feb 10 and Feb 15. The standard is wording complete, and the only thing between here and getting it published is ISO process. As is typical for me at these meetings, I spent a lot of time doing freestanding things, Library Incubator (LEWGI) things, and minuting along the way (15-ish sessions/papers!).

Freestanding

I had three freestanding papers coming into this meeting:
The first two papers are pieces of my former "P0829: Freestanding Proposal" paper, and had been seen by the Feature Test study group in Belfast. During this meeting, I got to run them by the Library Incubator for some design feedback. The papers were received well, though some potential danger points still exist. Library Evolution can look at the papers as soon as they have time.
P2013 is the first smaller piece taken out of "P1105: Leaving no room for a lower-level language: A C++ Subset". Exceptions are probably the most important thing in P1105, but there's so much activity going on in this area that it is hard for me to make good recommendations. The next highest priority was new and delete, hence P2013 being born. I also felt that P2013 was a good test paper to see if the committee was willing to make any language based change for freestanding.
I had presented P2013 in a prior Low Latency / SG14 telecon, and received unanimous approval (no neutral, no against votes). I was able to present it in the Evolution Incubator, and received no against votes. Then, in a surprisingly quick turnaround, I was able to present to Evolution, and again received no against votes. So now I just need to come up with wording that accomplishes my goals, without breaking constant evaluated new.

Errors and ABI

On Monday, we held a join session between Evolution and Library Evolution to talk about one of the C++ boogeymen, ABI. P1836 and P2028 have good background reading if you are not familiar with the topic. The usual arguments were raised, including that we are losing out on performance by preserving ABI, and that breaking ABI would mean abandoning some software that cannot be rebuilt today. We took some polls, and I fear that each person will interpret the polls differently. The way I interpreted the polls is that we won't do a "big" ABI break anytime soon, but we will be more willing to consider compiler heroics in order to do ABI breaks in the library.
One ABI area that is frequently overlooked is the situation that I am in. I can rebuild all of my source code, but even despite that I still care about ABI because I don't ship all of it together. I build a library with a plugin architecture, and breaking ABI would mean updating all the plugins on customer systems simultaneously... which is no easy task. I also ship binaries on Linux systems. We would prefer to be able to use new C++ features, despite targeting the various "LTS" distributions. ABI stability is a big part of that. I am hoping to make another post to cpp with my thoughts in the next few months, tentatively titled "ABI Breaks: Not just about rebuilding".
On Tuesday, LEWG discussed "P1656: 'Throws: Nothing' should be noexcept". This is a substantial change to the policy laid out in N3279, authored by Alisdair Meredith. That's why it is informally called the "Lakos" rule. We discussed the trade-offs involved, including how adding noexcept can constrain future changes, how noexcept can make precondition tests more difficult, and how this will change little in practice, because implementers already mark most "Throws: Nothing" calls as noexcept. Arguments about performance, code bloat, and standards guaranteed portability won out though. This paper was "only" a policy change, so a follow-on paper will need to be authored by someone in order to actually do the noexcept marking.
Wednesday night we had a social event celebrating the impending C++20 release. The event was held in the Prague Crossroads, built in 927 A.D.. The large tables let us have conversations with people we may not have really bumped into during the rest of the meeting. I started talking exceptions with a few of the other people at the table, and one of the had some particularly in depth knowledge about the topic. As it turns out, I was sitting at the same table as James Renwick of Low-cost Deterministic C++ Exceptions for Embedded Systems fame. I ended up talking his ear off over the course of the night.
Thursday in LEWG, we talked about Niall Douglas's "P1028: SG14 status_code and standard error object". This is the class that may one day be thrown by P0709 "Static" exceptions. Coincidentally, the most contentious parts were issues involving ABI. In several of the virtual interfaces in the standard, we've wanted to add things later, but haven't been able to do so.
Friday, James Renwick was able to present his paper, and the room was very receptive of it. One of my concerns going in to the presentation was that the committee would be unwilling to change anything in the standard related to today's exceptions. After the presentation and discussion, I'm less concerned about that. There was definitely a willingness to make some changes... but one of the big challenges is a question of whether we change default behavior in some cases, or change language ABI, even for C.

Other papers

P1385: "High level" Linear Algebra

This one is the "high level" linear algebra paper. There's a different, "lower level" linear algebra paper (P1673) that covers BLAS use cases. P1385 is intended to be something that can sit on top of P1673, if I understand correctly.
For being a math paper, there was surprisingly little math discussion in Library Incubator. We were generally discussing general interface issues like object ownership, concept requirements, and how to spell various operations, particularly inner product and outer product.

P1935: Physical Units

We are still in the philosophy and goals stage of this paper. We got to discuss the finer points of the distinctions between "kilogram" and "1 kilogram"; the difference between a unit, a dimension, and a quantity; and the difference between systems and models.
This paper is challenging in that there is significant prior art, as well as strong opinions about "the right way" to do things. This gets to one of the trickier parts of standards meetings... driving consensus. The interested parties have been requested to (preferably) work together outside of the three meetings a year, or failing that, to write a paper that gives some outline of what a solution should look like.
This paper also has an absurdly awesome / terrifying metaprogramming trick in it. A base class uses a friend declaration to declare (but not define) a function with an auto return type and no trailing return value. The derived class then declares and defines the function (again via friend) and lets the definition of the function determine the auto return type. This lets the base class use decltype to pull type information out of the derived class without explicitly passing that information down in a template argument (sorcery!). The main caveat with this trick is that it only works with exactly one derived class, as otherwise you end up with multiple conflicting definitions of the same function.

Concurrent Queues, P0260 and P1958

It's amazing what a minor paper reorg will do for productivity. This pair of papers used to be a single paper in the San Diego time frame, and we had a difficult time understanding how the pieces worked together. With the paper split as it is now, we have a small, concrete piece to review, which we were then able to see how it fit in to the interfaces and concepts of the larger paper. We got to dig in to some corner case traps with exception safety, move semantics, and race conditions. There were implementers in the room that could say what their implementation did, and I feel that the room was able to give good feedback to the authors.

P1944: constexpr and

Antony Polukhin is secretly my accidental nemesis (well, not so secret anymore). Over the course of C++20, he sprinkled constexpr on many of the things. As it turns out, there is a large (but not 100%) overlap of constexpr and freestanding. Each thing that went constexpr turned into a merge conflict that I got to resolve in my papers.
And he's still at it!
In this case, 100% of the things that were constexpr'd were also things that I have previously identified as being potentially freestanding. So that's a positive. There were concerns about implementability though, as sometimes, the C library and the C++ library come from different vendors, and having forwarding wrappers is far from trivial.

A minute about minuting

For the wg21 readers out there, if you think you are bad at taking minutes, that just means you need more practice :) . If you find yourself in a room that is about to review a paper that you are not heavily invested in, volunteer to take minutes. That way you can make a valuable contribution, even for an area where you don't have domain expertise.
As a bonus, you get to follow the minuter's code (something I just made up) about spelling other people's names. As the person taking minutes, you have license to change up to three letters in someone's name, so long as it isn't used maliciously. You can freely take any double letter in a name and convert it to a single letter (e.g. Connor -> Conor), turn a single letter to a double letter (David -> Davvid), or completely rearrange any consecutive series of vowels. And people will thank you for it! You are also given free license to interrupt people in order to ask them who they are. Give it a try!

Closing

I've got a bunch of papers to write for the next mailing, and I won't even be in Varna. So if you're interested in championing some freestanding papers, let me know, and I can coach you on the topics.
submitted by ben_craig to cpp [link] [comments]

Using Deep Learning to Predict Earnings Outcomes

Using Deep Learning to Predict Earnings Outcomes
(Note: if you were following my earlier posts, I wrote a note at the end of this post explaining why I deleted old posts and what changed)
TLDR:
Not financial advice.
  • I created a deep learning algorithm trained on 2015-2019 data to predict whether a company will beat earning estimates.
  • Algorithm has an accuracy of 58%.
  • I need data and suggestions.
  • I’ll be making daily posts for upcoming earnings.
Greetings everyone,
I’m Bunga, an engineering PhD student at well known university. Like many of you, I developed an interest in trading because of the coronavirus. I lost a lot of money by being greedy and uninformed about how to actually trade options. With all the free time I have with my research slowing down because of the virus, I’ve decided to use what I’m good at (being a nerd, data analytics, and machine learning) to help me make trades.
One thing that stuck out to me was how people make bets on earnings reports. As a practitioner of machine learning, we LOVE binary events since the problem can be reduced to a simple binary classification problem. With that being said, I sought out to develop a machine learning algorithm to predict whether a company will beat earnings estimates.
I strongly suggest TO NOT USE THIS AS FINANCIAL ADVICE. Please, I could just be a random guy on the internet making things up, and I could have bugs in my code. Just follow along for some fun and don’t make any trades based off of this information 😊
Things other people have tried:
A few other projects have tried to do this to some extent [1,2,3], but some are not directly predicting the outcome of the earnings report or have a very small sample size of a few companies.
The data
This has been the most challenging part of the project. I’m using data for 4,000 common stocks.
Open, high, low, close, volume stock data is often free and easy to come by. I use stock data during the quarter (Jan 1 – Mar 31 stock data for Q1 for example) in a time series classifier. I also incorporate “background” data from several ETFs to give the algorithm a feel for how the market is doing overall (hopefully this accounts for bull/bear markets when making predictions).
I use sentiment analyses extracted from 10K/10Q documents from the previous quarter as described in [4]. This gets passed to a multilayer perceptron neural network.
Data that I’ve tried and doesn’t work to well:
Scraping 10K/10Q manually for US GAAP fields like Assets, Cash, StockholdersEquity, etc. Either I’m not very good at processing the data or most of the tables are incomplete, this doesn’t work well. However, I recently came across this amazing API [5] which will ameliorate most of these problems, and I plan on incorporating this data sometime this week.
Results
After training on about 34,000 data points, the model achieves a 58% accuracy on the test data. Class 1 is beat earnings, Class 2 is miss earnings.. Scroll to the bottom for the predictions for today’s AMC estimates.

https://preview.redd.it/qmeig6of3tv41.png?width=875&format=png&auto=webp&s=c8ba4a34294b7388bf1b9e64150d7375da959ac2
Future Directions
Things I’m going to try:
  • Financial twitter sentiment data (need data for this)
  • Data on options (ToS apparently has stuff that you can use)
  • Using data closer to the earnings report itself rather than just the data within the quarterly date
A note to the dozen people who were following me before
Thank you so much for the early feedback and following. I had a bug in my code which was replicating datapoints, causing my accuracy to be way higher in reality. I’ve modified some things to make the network only output a single value, and I’ve done a lot of bug fixing.
Predictions for 4/29/20 AMC:
A value closer to 1 means that the company will be more likely to beat earnings estimates. Closer to 0 means the company will be more likely to miss earnings estimates. (People familiar with machine learning will note that neural networks don’t actually output a probability distribution so the values don’t actually represent a confidence).
  • Tkr: AAPL NN: 0.504
  • Tkr: AMZN NN: 0.544
  • Tkr: UAL NN: 0.438
  • Tkr: GILD NN: 0.532
  • Tkr: TNDM NN: 0.488
  • Tkr: X NN: 0.511
  • Tkr: AMGN NN: 0.642
  • Tkr: WDC NN: 0.540
  • Tkr: WHR NN: 0.574
  • Tkr: SYK NN: 0.557
  • Tkr: ZEN NN: 0.580
  • Tkr: MGM NN: 0.452
  • Tkr: ILMN NN: 0.575
  • Tkr: MOH NN: 0.500
  • Tkr: FND NN: 0.542
  • Tkr: TWOU NN: 0.604
  • Tkr: OSIS NN: 0.487
  • Tkr: CXO NN: 0.470
  • Tkr: BLDR NN: 0.465
  • Tkr: CASA NN: 0.568
  • Tkr: COLM NN: 0.537
  • Tkr: COG NN: 0.547
  • Tkr: SGEN NN: 0.486
  • Tkr: FMBI NN: 0.496
  • Tkr: PSA NN: 0.547
  • Tkr: BZH NN: 0.482
  • Tkr: LOCO NN: 0.575
  • Tkr: DLA NN: 0.460
  • Tkr: SSNC NN: 0.524
  • Tkr: SWN NN: 0.476
  • Tkr: RMD NN: 0.499
  • Tkr: VKTX NN: 0.437
  • Tkr: EXPO NN: 0.526
  • Tkr: BL NN: 0.516
  • Tkr: FTV NN: 0.498
  • Tkr: ASGN NN: 0.593
  • Tkr: KNSL NN: 0.538
  • Tkr: RSG NN: 0.594
  • Tkr: EBS NN: 0.483
  • Tkr: PRAH NN: 0.598
  • Tkr: RRC NN: 0.472
  • Tkr: ICBK NN: 0.514
  • Tkr: LPLA NN: 0.597
  • Tkr: WK NN: 0.630
  • Tkr: ATUS NN: 0.530
  • Tkr: FBHS NN: 0.587
  • Tkr: SWI NN: 0.521
  • Tkr: TRUP NN: 0.570
  • Tkr: AJG NN: 0.509
  • Tkr: BAND NN: 0.618
  • Tkr: DCO NN: 0.514
  • Tkr: BRKS NN: 0.490
  • Tkr: BY NN: 0.502
  • Tkr: CUZ NN: 0.477
  • Tkr: EMN NN: 0.532
  • Tkr: VICI NN: 0.310
  • Tkr: GLPI NN: 0.371
  • Tkr: MTZ NN: 0.514
  • Tkr: SEM NN: 0.405
  • Tkr: SPSC NN: 0.465
[1] https://towardsdatascience.com/forecasting-earning-surprises-with-machine-learning-68b2f2318936
[2] https://zicklin.baruch.cuny.edu/wp-content/uploads/sites/10/2019/12/Improving-Earnings-Predictions-with-Machine-Learning-Hunt-Myers-Myers.pdf
[3] https://www.euclidean.com/better-than-human-forecasts
[4] https://cran.r-project.org/web/packages/edgaedgar.pdf.
[5] https://financialmodelingprep.com/developedocs/
submitted by xXx_Bunga_xXx to u/xXx_Bunga_xXx [link] [comments]

ForexBit Review

Overview:

The name of this broker ForexBit suggests that the broker deals with the exchange of Forex, Cryptos and provides Contracts-for-Difference. The broker does not mention any account types on its website but shows some investment plans. The plans offered show growth in investments on an hourly basis. The website looks attractive but also seems misguiding. This ForexBit review will shed light on the characteristics and offerings of this broker. Don’t forget to follow this review completely for the sake of your investments.

About ForexBit:

The broker ForexBit offers trade-in FX and binary options. The assets provided by them are very broad. The assets consist of cryptos, indexes, lots of commodities, shares, bonds, and futures. The crypto-coin portfolio of this broker is also very wide and contains all major cryptos like Bitcoin, Ethereum, Ripple, Litecoin, Dash, and minor ones like IOTA, ZCash, Ada, NEO, Bitcoin Cash, Stellar Lumens, and several others. The official website claims that potential customers of ForexBit are provided with MetaTrader5 trading platform.
The domain of this broker does not furnish information about its owner or manager. But interestingly it provides a company number on the top side of the website. When clicked on it, it redirects to a pdf file that mentions the owner's name and other details. The name of the owner turns out to be Donald Brian and a UK based address. Not surprisingly enough, such documentation and information must be treated as scam and misleading. No genuine broker has such a witty information system. Furthermore, the Financial Conduct Authority in the UK has blacklisted this shady broker on its website. So, it is clear that the broker ForexBit is unlicensed and unregulated. And its potential clients are prone to scam and their funds are not in the safe hands.
The initial investment required starts from $20 to $2500 according to the plans. The level 1 plan offers a 10% growth in 8 hours with a referral of 5%. The level 2 plan offers a 15% growth in 8 hours with a referral of 5%. The level 3 plan offers a 30% growth in 7 hours with a referral of 7%. And the advance plan offers a 55% growth in investment in just 4 hours with a referral of 8%. But the question of how ForexBit will achieve such high profit in such a less time is unanswered.

Is ForexBit scam or legit?

The answer to this question is straight forward, the broker ForexBit is a scam. The information provided on the website does not fulfill any trading criteria. It only asks for the investments. Furthermore, the great strategy for gaining such a huge profit in very less time is also not mentioned anywhere. The provided information on its owner is as shady as it gets. The referral system present makes it clear that the broker is not genuine and trying to make money merely by trader's investments and their referrals. Stay away from this cryptocurrency scam.
submitted by fraudbrokers to u/fraudbrokers [link] [comments]

How To Trade Binary Option Wisely Without Loss 2020 [ with strong Indicators 100% winning strategy ] binary options trading strategy that works pdf - 2020 - 2 ... 5 minute binary options trading strategy pdf - new 5 min binary option strategy 2020.. 98% win rate How to trade Binary Options for beginners - Binary Options 101 How To Trade Options For Beginners

Learn Binary Options Trading Course. In binary options trading, each trade will eventually settle at $0 or $100. Trading binary options and CFDs on Synthetic Indices is classified as a gambling activity. The way learn to trade binary options pdf it … Binary options are complex, exotic trade options, but these are particularly simple to How to make a Binary Options trade Making a traditional binary option trade involves taking a series of steps as follows: Choose from among the available underlying assets, such as currency pairs, stocks, indices, and commodities. Select an expiry time frame for the binary. The trading binary options ‘Abe Cofnas’ pdf is particularly popular. Forums & chat rooms – This is the perfect place to brainstorm ideas with binary options gurus. You can benefit from recommendations and learn in real-time whilst investing in your binary options. A binary options trade usably involved three steps: First, you choose a trade expiration time, this is the time you want the trade to end. It could be any time period between a minute and a week - usably it is within the day. Second, you choose Call or Put. If you think the price will end up above the current A Complete Guide to Binary Options Trading By Meir Liraz

[index] [14599] [20704] [27229] [7255] [6543] [5014] [1849] [6400] [1197] [18964]

How To Trade Binary Option Wisely Without Loss 2020 [ with strong Indicators 100% winning strategy ]

Binary trading strategy and simple technical analysis for beginners to increase win rate trading 3 - 5 minute binary options signals. Get FX Master Code signals: https://tradingwalk.com ... This might be the best way to trade binary options trading because of the straight to the point method. Learn our binary options trading strategy – best 60 seconds strategies!.. Chapter 1 - Introduction to binary options trading: brokers, how it works, example of trade Chapter 2 - Bid/offer levels from the brokers: what it means in terms of probabilities to end up in the ... How i trade 5 minutes binary options with my 5 minutes binary options strategy 90 - 95% winning (100% profit guaranteed) i get a lot of winning trades and the indicator arrow does not repaint or ... 60 seconds binary options strategy 99 - 100% winning (100% profit guaranteed).Hey guys today i will show a 1-minute trading strategy that you can use today on any binary options broker you can ...

Flag Counter