
No gods, no kings, only NOPE - or divining the future with options flows. [Part 2: A Random Walk and Price Decoherence]

tl;dr -
1) Stock prices move continuously because different market participants end up having different ideas of the future value of a stock.
2) This difference in valuations is part of the reason we have volatility.
3) IV crush happens because future possibilities are extinguished very rapidly at a binary catalyst like earnings, as opposed to the normal slow way.
I promise I'm getting to the good parts, but I'm also writing these as a guidebook which I can use later so people never have to talk to me again.
In this part I'm going to start veering a bit into speculation territory (e.g. ideas I believe or have investigated, but aren't necessarily well known), but I'm going to make sure those sections are properly marked as speculative (and you can feel free to ignore/dismiss them). Marked as [Lily's Speculation].
As some commenters have pointed out in prior posts, I do not have formal training in mathematical finance/finance (my background is computer science, discrete math, and biology), so oftentimes I may use terms that I've invented which have analogous/existing terms (e.g. the law of surprise is actually the first law of asset pricing applied to derivatives under risk neutral measure, but I didn't know that until I read the papers later). If I mention something wrong, please do feel free to either PM me (not chat) or post a comment, and we can discuss/I can correct it! As always, buyer beware.
This is also the first section where you do need to be familiar with the topics I've previously discussed, which I've linked below (my previous posts:
1) https://www.reddit.com/thecorporation/comments/jck2q6/no_gods_no_kings_only_nope_or_divining_the_future/
2) https://www.reddit.com/thecorporation/comments/jbzzq4/why_options_trading_sucks_or_the_law_of_surprise/
---
A Random Walk Down Bankruptcy
A lot of us have probably seen the term random walk, maybe in the context of A Random Walk Down Wall Street, which seems like a great book I'll add to my list of things to read once I figure out how to control my ADD. It seems obvious, then, what a random walk means - when something is moving, it basically means that the next move is random. So if my stock price is $1 and I can move in $0.01 increments, if the stock price is truly randomly walking, there should be roughly a 50% chance it moves up in the next second (to $1.01) or down (to $0.99).
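To make that concrete, here's a minimal sketch of that coin-flip walk (the $1 starting price, $0.01 tick, and 50/50 odds are just the toy numbers from above - an illustration, not a trading model):

    import random

    def random_walk(price=1.00, steps=10, tick=0.01):
        # Each step, the price moves up or down one tick with equal probability.
        path = [round(price, 2)]
        for _ in range(steps):
            price += tick if random.random() < 0.5 else -tick
            path.append(round(price, 2))
        return path

    print(random_walk())  # e.g. [1.0, 1.01, 1.0, 0.99, 0.98, 0.97, ...]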
If you've traded for more than a hot minute, this concept should seem obvious, because especially on the intraday, it usually isn't clear why price moves the way it does (despite what chartists want to believe, and I'm sure a ton of people in the comments will tell me why fettucini lines and Batman doji tell them things). For a simple example, we can look at SPY's chart from Friday, Oct 16, 2020:

https://preview.redd.it/jgg3kup9dpt51.png?width=1368&format=png&auto=webp&s=bf8e08402ccef20832c96203126b60c23277ccc2
I'm sure again 7 different people can tell me 7 different things about why the chart shape looks the way it does, or how if I delve deeply enough into it I can find out which man I'm going to marry in 2024, but to a rationalist it isn't exactly apparent why SPY's price declined from 349 to ~348.5 at around 12:30 PM, or why it picked up until about 3 PM and then went into precipitous decline (although I do have theories why it declined EOD, but that's for another post).
An extremely clever or bored reader from my previous posts could say, "Is this the price formation you mentioned in the law of surprise post?" and the answer is yes. If we relate it back to the individual buyer or seller, we can explain the concept of a stock price's random walk as such:
Most market participants have an idea of an asset's true value (an idealized concept of what an asset is actually worth), which they can derive using models or possibly enough brain damage. However, an asset at any given time is not worth one value (usually*), but a spectrum of possible values, usually representing what the asset should be worth in the future. A naive way we can represent this without delving into too much math (because let's face it, most of us fucking hate math) is:
Current value of an asset = sum over all (future possible value multiplied by the likelihood of that value)
In actuality, most models aren't that simple, but it does generalize to a ton of more complicated models which you need more than 7th grade math to understand (Black-Scholes, DCF, blah blah blah).
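In code, the naive version is just an expected value (the spectrum below is made up purely for illustration):

    # Hypothetical payoff spectrum: (future possible value, likelihood of it).
    payoff_spectrum = [(300.0, 0.25), (350.0, 0.50), (400.0, 0.25)]

    # Current value = each future value weighted by its likelihood, summed.
    current_value = sum(value * prob for value, prob in payoff_spectrum)
    print(current_value)  # 350.0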
While in many cases the first term - future possible value - is well defined (Tesla is worth exactly $420.69 billion in 2021, and maybe we all can agree on that by looking at car sales and Musk tweets), where it gets more interesting is the second term - the likelihood of that value occurring. [In actuality, the price of a stock for instance is way more complicated, because a stock can be sold at any point in the future (versus in my example, just the value in 2021), and needs to account for all values of Tesla at any given point in the future.]
How do we estimate the second term - the likelihood of that value occurring? For this class, it actually doesn't matter, because the key concept is this idea: even with all market participants having the same information, we do anticipate that every participant will have a slightly different view of future likelihoods. Why is that? There are many reasons. Some participants may undervalue risk (aka WSB FD/yolos) and therefore weight probabilities of gaining lots of money much more heavily than going bankrupt. Some participants may have alternative data which improves their understanding of what the future values should be, therefore letting them see opportunity. Some participants might overvalue liquidity, and just want to GTFO and thereby accept a haircut on their asset's value to quickly unload it (especially in markets with low liquidity). Some participants may just be yoloing and not even know what Fastly does before putting their account all in weekly puts (god bless you).
In the end, what matters isn't the why, but the what: because of these diverging interpretations, over time, we can expect the price of an asset to drift from the current value even with no new information added. In most cases, the calculations that market participants use (which I will, as a Lily-ism, call the future expected payoff function, or FEPF) end up being quite similar in aggregate, and this is why asset prices likely tend to move slightly up and down for no reason (or rather, this is one interpretation of why).
At this point, I expect the 20% of you who know what I'm talking about or have a finance background to say, "Oh but blah blah efficient market hypothesis contradicts random walk blah blah blah" and you're correct, but it also legitimately doesn't matter here. In the long run, stock prices are clearly not a random walk, because a stock's value is obviously tied to the company's fundamentals (knock on wood I don't regret saying this in the 2020s). However, intraday, in the absence of new, public information, it becomes a close enough approximation.
Also, some of you might wonder what happens when the future expected payoff function (FEPF) I mentioned before ends up wildly diverging for a stock between participants. This could happen because all of us try to short Nikola because it's quite obviously a joke (so our FEPF for Nikola could, let's say, be 0), while the 20 or so remaining bagholders at Nikola Corporation decide that their FEPF of Nikola is $10,000,000 a share. One of the interesting things which intuitively makes sense is that, for nearly all stocks, the amount of divergence among market participants in their FEPF increases substantially the farther you get into the future.
This intuitively makes sense, even if you've already quit trying to understand what I'm saying. It's quite easy to say that if at 12:51 PM SPY is worth 350.21, then at 12:52 PM SPY will in all likelihood be worth 350.10 or 350.30. Obviously there are cases where this doesn't hold, but more likely than not, prices tend to follow each other, and don't gap up/down hard intraday. However, what if I asked you - given SPY is worth 350.21 at 12:51 PM today, what will it be worth in 2022?
Many people will then try to half ass some DD about interest rates and Trump fleeing to Ecuador to value SPY at 150, while others will assume bull markets will continue indefinitely and SPY will obviously be 7000 by then. The truth is -- no one actually knows, because if you did, you wouldn't be reading a reddit post on this at 2 AM in your jammies.
In fact, if you could somehow figure out the FEPF of all market participants at any given time, assuming no new information occurs, you should be able to roughly predict the true value of an asset infinitely far into the future (hint: this doesn't exactly hold, but again don't @ me).
Now if you do have a finance background, I expect gears will have clicked for some of you, and you may see strong analogies between the FEPF divergence I mentioned, and a concept we're all at least partially familiar with - volatility.
Volatility and Price Decoherence ("IV Crush")
Volatility, just like the Greeks, isn't exactly a real thing. Most of us have some familiarity with implied volatility on options, mostly when we get IV crushed the first time and realize we just lost $3000 on Tesla calls.
If we assume that the current price should represent the weighted likelihoods of all future prices (the random walk), volatility implies the following two things:
  1. Volatility reflects the uncertainty of the current price
  2. Volatility reflects the uncertainty of the future price for every point in the future where the asset has value (up to expiry for options)
[Ignore this section if you aren't pedantic] There's obviously more complex mathematics, because I'm sure some of you will argue in the comments that IV doesn't go up monotonically as option expiry date goes longer and longer into the future, and you're correct (this is because asset pricing reflects drift rate and other factors, and certain assets like the VIX end up having a cost of carry).
Volatility in options is interesting as well, because in actuality, it isn't something that can be exactly computed -- it arises as a plug between the idealized value of an option (the modeled price) and the real, market value of an option (the spot price). Additionally, because the makeup of market participants in an asset's market changes over time, and new information also comes in (thereby increasing likelihood of some possibilities and reducing it for others), volatility does not remain constant over time, either.
Conceptually, volatility also is pretty easy to understand. But what about our friend, IV crush? I'm sure some of you have bought options to play events, the most common one being earnings reports, which happen quarterly for every company due to regulations. For the more savvy, you might know of expected move, which is a calculation that uses the volatility (and therefore price) increase of at-the-money options about a month out to calculate how much the options market forecasts the underlying stock price to move as a response to ER.
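For reference, one common back-of-envelope version of expected move (there are fancier versions using the ATM straddle; this is just a sketch, not necessarily the exact calculation your broker uses) is spot price times IV times the square root of time:

    import math

    def expected_move(spot, iv, days):
        # Rough expected move: spot * implied volatility * sqrt(time in years).
        return spot * iv * math.sqrt(days / 365)

    # Hypothetical: $350 underlying, 40% IV, 7 days until the post-ER expiry.
    print(round(expected_move(350, 0.40, 7), 2))  # ~19.39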
Binary Catalyst Events and Price Decoherence
Remember what I said about price formation being a gradual, continuous process? In the face of special circumstances, in particular binary catalyst events - events where the outcome is one of two choices, good (1) or bad (0) - the gradual part gets thrown out the window. Earnings in particular is a common and notable case of a binary event, because the price will go down (assuming the company did not meet the market's expectations) or up (assuming the company exceeded the market's expectations) (it will rarely stay flat, so I'm not going to address that case).
Earnings especially is interesting, because unlike other catalytic events, they're pre-scheduled (so the whole market expects them at a certain date/time) and usually have publicly released pre-estimations (guidance, analyst predictions). This separates them from other binary catalysts (e.g. FSLY dipping 30% on guidance update) because the market has ample time to anticipate the event, and participants therefore have time to speculate and hedge on the event.
In most binary catalyst events, we see rapid fluctuations in price, usually called a gap up or gap down, which is caused by participants rapidly intaking new information and changing their FEPF accordingly. This is for the most part an anticipated adjustment to the FEPF based on the expectation that earnings is a Very Big Deal (TM), and is the reason why volatility and therefore option premiums increase so dramatically before earnings.
What makes earnings so interesting in particular is the dramatic effect it can have on all market participants' FEPF, as opposed to let's say a Trump tweet, or more people dying of coronavirus. In lots of cases, the short-term FEPF (3-6 months) especially changes rapidly in response to updated guidance about a company, causing large portions of the future possibility spectrum to rapidly and spectacularly go to zero. In an instant, your Tesla 10/30 800Cs go from "some value" to "not worth the electrons they're printed on".
[Lily's Speculation] This phenomenon I like to call price decoherence, mostly as an analogy to quantum mechanical processes which produce similar results (the collapse of a wavefunction on observation). Price decoherence occurs at a widespread but minor scale continuously, which we normally call price formation (and explains portions of the random walk derivation explained above), but hits a special limit in the face of binary catalyst events, as in an instant large portions of the future expected payoff function are extinguished, versus a more gradual process which occurs over time (as an option nears expiration).
Price decoherence, mathematically, ends up being a more generalizable case of the phenomenon we all love to hate - IV crush. Price decoherence during earnings collapses the future expected payoff function of a ticker, leading large portions of the option chain to be effectively worthless (IV crush). It has interesting implications, especially in the case of hedged option sellers, our dear Market Makers. Given the expectation that they maintain delta-gamma neutrality, and that many of the options they have written are now worthless with 0 delta, what do they have to do?
They have to unwind.
[/Lily's Speculation]
- Lily

No gods, no kings, only NOPE - or divining the future with options flows. [Part 3: Hedge Winding, Unwinding, and the NOPE]

Hello friends!
We're on the last post of this series ("A Gentle Introduction to NOPE"), where we get to use all the Big Boy Concepts (TM) we've discussed in the prior posts and put them all together. Some words before we begin:
  1. This post will be massively theoretical, in the sense that my own speculation and inferences will be largely peppered throughout the post. Are those speculations right? I think so, or I wouldn't be posting it, but they could also be incorrect.
  2. I will briefly touch on using the NOPE in this post, but I will make a secondary post with much more interesting data and trends I've observed. This is primarily for explaining what NOPE is, why it potentially works, and what it potentially measures.
My advice before reading this is to glance at my prior posts, and either read those fully or at least make sure you understand the tl;drs:
https://www.reddit.com/thecorporation/collection/27dc72ad-4e78-44cd-a788-811cd666e32a
Depending on popular demand, I will also make a last-last post called FAQ, where I'll tabulate interesting questions you guys ask me in the comments!
---
So a brief recap before we begin.
Market Maker ("Mr. MM"): An individual or firm who makes money off the exchange fees and bid-ask spread for an asset, while usually trying to stay neutral about the direction the asset moves.
Delta-gamma hedging: The process Mr. MM uses to stay neutral when selling you shitty OTM options, by buying/selling shares (usually) of the underlying as the price moves.
Law of Surprise [Lily-ism]: Effectively, the expected profit of an options trade is zero for both the seller and the buyer.
Random Walk: A special case of a deeper probability concept called a martingale, which basically models stocks or similar phenomena randomly moving every step they take (for stocks, roughly every millisecond). This is one of the most popular views of how stock prices move, especially on short timescales.
Future Expected Payoff Function [Lily-ism]: This is some hidden function that every market participant has about an asset, which more or less models all the possible future probabilities/values of the assets to arrive at a "fair market price". This is a more generalized case of a pricing model like Black-Scholes, or DCF.
Counter-party: The opposite side of your trade (if you sell an option, they buy it; if you buy an option, they sell it).
Price decoherence [Lily-ism]: A more generalized notion of IV Crush, price decoherence happens when instead of the FEPF changing gradually over time (price formation), the FEPF rapidly changes, due usually to new information being added to the system (e.g. Vermin Supreme winning the 2020 election).
---
One of the most popular gambling events for option traders to play is earnings announcements, and I do owe the concept of NOPE to hypothesizing specifically about the behavior of stock prices at earnings. Much like a black hole in quantum mechanics, most conventional theories about how price should work rapidly break down briefly before, during, and after ER, and generally experienced traders tend to shy away from playing earnings, given their similar unpredictability.
Before we start: what is NOPE? NOPE is a funny backronym for Net Options Pricing Effect, which in its most basic sense, measures the impact option delta has on the underlying price, as compared to share price. When I first started investigating NOPE, I called it OPE (options pricing effect), but NOPE sounds funnier.
The formula for it is dead simple, but I also have no idea how to do LaTeX on reddit, so this is the best I have:

https://preview.redd.it/ais37icfkwt51.png?width=826&format=png&auto=webp&s=3feb6960f15a336fa678e945d93b399a8e59bb49
Since I've already encountered this question: put delta in this case is the absolute value (a 50 delta put counts as 50). If you represent put delta as a negative (the conventional way), do not subtract it; add it.
To keep this simple for the non-mathematically minded: the NOPE today is equal to the weighted sum (weighted by volume) of the delta of every call minus the delta of every put for all options chains extending from today to infinity. Finally, we then divide that number by the # of shares traded today in the market session (ignoring pre-market and post-market, since options cannot trade during those times).
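As a rough sketch of that calculation (put deltas here are their conventional negative values, so adding them nets them against the calls; the 100 shares-per-contract multiplier is my assumption based on standard US option contracts):

    def nope(option_rows, share_volume):
        # option_rows: (delta, contracts traded today) for every option on
        # the chain; put deltas are negative, so summing nets them against calls.
        net_delta_shares = sum(delta * volume * 100  # 100 shares per contract
                               for delta, volume in option_rows)
        return net_delta_shares / share_volume

    # Hypothetical day: two call strikes and one put strike.
    rows = [(0.30, 1000), (0.55, 400), (-0.20, 300)]
    print(nope(rows, share_volume=2_000_000))  # 0.023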
Effectively, NOPE is a rough and dirty way to approximate the impact of delta-gamma hedging as a function of share volume, with us hand-waving the following factors:
  1. To keep calculations simple, we assume that all counter-parties are hedged. This is obviously not true, especially for idiots who believe theta ganging is safe, but holds largely true especially for highly liquid tickers, or tickers with designated market makers (e.g. any ticker in the NASDAQ).
  2. We assume that all hedging takes place via shares. For SPY and other products tracking the S&P, for instance, market makers can actually hedge via futures or other options. This has the benefit for large positions of not moving the underlying price, but still makes up a fairly small amount of hedges compared to shares.

Winding and Unwinding

I briefly touched on this in a past post, but two properties of NOPE seem to apply well to ER-like behavior (aka any binary catalyst event):
  1. NOPE measures sentiment - In general, the options market is seen as better informed than share traders (e.g. insiders trade via options, because of leverage + easier to mask positions). Therefore, a heavy call/put skew is usually seen as a bullish sign, while the reverse is also true.
  2. NOPE measures system stability
I'm not going to one-sentence explain #2, because why say in one sentence what I can write 1000 words on. In short, NOPE intends to measure sensitivity of the system (the ticker) to disruption. This makes sense, when you view it in the context of delta-gamma hedging. When we assume all counter-parties are hedged, this means an absolutely massive amount of shares get sold/purchased when the underlying price moves. This is because of the following:
a) Assume I, Mr. MM sell 1000 call options for NKLA 25C 10/23 and 300 put options for NKLA 15p 10/23. I'm just going to make up deltas because it's too much effort to calculate them - 30 delta call, 20 delta put.
This implies Mr. MM needs the following to delta hedge: (1000 call options * 30 shares to buy for each) [to balance out writing calls] - (300 put options * 20 shares to sell for each) = 24,000 net shares Mr. MM needs to acquire to balance out his deltas/be fully neutral.
b) This works well when NKLA is at $20. But what about when it hits $19 (because it only can go down, just like their trucks). Thanks to gamma, now we have to recompute the deltas, because they've changed for both the calls (they went down) and for the puts (they went up).
Let's say to keep it simple that now my calls are 20 delta, and my puts are 30 delta. From the 24,000 net shares, Mr. MM has to now have:
(1000 call options * 20 shares to have for each) - (300 put options * 30 shares to sell for each) = 11,000 shares.
Therefore, with a $1 shift in price, now to hedge and be indifferent to direction, Mr. MM has to go from 24,000 shares to 11,000 shares, meaning he has to sell 13,000 shares ASAP, or take on increased risk. Now, you might be saying, "13,000 shares seems small. How would this disrupt the system?"
(This process, by the way, is called hedge unwinding)
It won't, in this example. But across thousands of MMs and millions of contracts, this can - especially in highly optioned tickers - make up a substantial fraction of the net flow of shares per day. And as we know from our desk example, the buying or selling of shares directly changes the price of the stock itself.
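Recapping Mr. MM's arithmetic as a sketch (deltas expressed in "shares per contract", so a 30 delta call = 30 shares, exactly as in the example above):

    def net_hedge(calls, call_delta, puts, put_delta):
        # Shares Mr. MM must hold to stay delta neutral against
        # the options he has written (deltas in shares per contract).
        return calls * call_delta - puts * put_delta

    before = net_hedge(1000, 30, 300, 20)  # 24,000 shares held at $20
    after = net_hedge(1000, 20, 300, 30)   # 11,000 shares held at $19
    print(before - after)                  # 13,000 shares to sell ASAP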
This, by the way, is why the NOPE formula takes the shape it does. Some astute readers might notice it looks similar to GEX, which is not a coincidence. GEX however replaces daily volume with open interest, and measures gamma over delta, which I did not find good statistical evidence to support, especially for earnings.
So, with our example above, why does NOPE measure system stability? We can assume for argument's sake that if someone buys a share of NKLA, they're fine with moderate price swings (+- $20 since it's NKLA, obviously), and in it for the long/medium haul. And in most cases this is fine - we can own stock and not worry about minor swings in price. But market makers can't* (they can, but it exposes them to risk), because of how delta works. In fact, for most institutional market makers, they have clearly defined delta limits by end of day, and even small price changes require them to rebalance their hedges.
This over the whole market adds up to a lot of shares moving, just to balance out your stupid Robinhood YOLOs. While there are some tricks (dark pools, block trades) to not impact the price of the underlying, the reality is that the more options contracts there are on a ticker, the more outsized influence it will have on the ticker's price. This can technically be exactly balanced, if option put delta is equal to option call delta, but that never actually ends up being the case. And unlike shares traded, the shares representing the options are more unstable, meaning they will be sold/bought in response to small price shifts. And will end up magnifying those price shifts, accordingly.

NOPE and Earnings

So we have a new shiny indicator, NOPE. What does it actually mean and do?
There's much literature going back to the 1980s showing that options markets do have some level of predictiveness towards earnings, which makes sense intuitively. Unlike shares markets, where you can continue to hold your share even if it dips 5%, in options you get access to expanded opportunity to make riches... and losses. An options trader betting on earnings is making a risky and therefore informed bet that he or she knows the outcome, versus a share trader who might be comfortable bagholding in the worst case scenario.
As I've mentioned largely in comments on my prior posts, earnings is a special case because, unlike popular misconceptions, stocks do not go up and down solely due to analyst expectations being met, beaten, or missed. In fact, stock prices move according to the consensus market expectation, which is a function of all the participants' FEPF on that ticker. This is why the price moves so dramatically - even if a stock beats, it might not beat enough to justify the high price tag (FSLY); even if a stock misses, it might have spectacular guidance or maybe the market just was assuming it would go bankrupt instead.
To look at the impact of NOPE and why it may play a role in post-earnings-announcement immediate price moves, let's review the following cases:
  1. Stock Meets/Exceeds Market Expectations (aka price goes up) - In the general case, we would anticipate post-ER market participants value the stock at a higher price, pushing it up rapidly. If there's a high absolute value of NOPE on said ticker, this should end up magnifying the positive move since:
a) If NOPE is high negative - This means a ton of put buying, which means a lot of those puts are now worthless (due to price decoherence). This means that to stay delta neutral, market makers need to close out their sold/shorted shares, buying them, and pushing the stock price up.
b) If NOPE is high positive - This means a ton of call buying, which means a lot of puts are now worthless (see a) but also a lot of calls are now worth more. This means that to stay delta neutral, market makers need to close out their sold/shorted shares AND also buy more shares to cover their calls, pushing the stock price up.
  2. Stock Meets/Misses Market Expectations (aka price goes down) - Inversely to what I mentioned above, this should push the stock price down, fairly immediately. If there's a high absolute value of NOPE on said ticker, this should end up magnifying the negative move since:
a) If NOPE is high negative - This means a ton of put buying, which means a lot of those puts are now worth more, and a lot of calls are now worth less/worthless (due to price decoherence). This means that to stay delta neutral, market makers need to sell/short more shares, pushing the stock price down.
b) If NOPE is high positive - This means a ton of call buying, which means a lot of calls are now worthless (see a) but also a lot of puts are now worth more. This means that to stay delta neutral, market makers need to sell even more shares to keep their calls and puts neutral, pushing the stock price down.
---
Based on the above two cases, it should be a bit more clear why NOPE is a measure of sensitivity to system perturbation. While we previously discussed it in the context of magnifying directional move, the truth is it also provides a directional bias to our "random" walk. This is because given a price move in the direction predicted by NOPE, we expect it to be magnified, especially in situations of price decoherence. If a stock price goes up right after an ER report drops, even based on one participant deciding to value the stock higher, this provides a runaway reaction which boosts the stock price (due to hedging factors as well as other participants' behavior) and inures it to drops.

NOPE and NOPE_MAD

I'm going to gloss over this section because this is more statistical methods than anything interesting. In general, if you have enough data, I recommend using NOPE_MAD over NOPE. While NOPE in theory represents a "real" quantity (net option delta over net share delta), NOPE_MAD (the median absolute deviation of NOPE) does not. NOPE_MAD simply answers the following:
  1. How exceptional is today's NOPE versus historic baseline (30 days prior)?
  2. How do I compare two tickers' NOPEs effectively (since some tickers, like TSLA, have a baseline positive NOPE, because Elon memes)? In the initial stages, we used just a straight numerical threshold (let's say NOPE >= 20), but that quickly broke down. NOPE_MAD aims to detect anomalies, because anomalies in general give you tendies.
I might add the formula later in Mathenese, but simply put, to find NOPE_MAD you do the following (a rough code sketch follows the list):
  1. Calculate today's NOPE score (this can be done end of day or intraday, with the true value being EOD of course)
  2. Calculate the end of day NOPE scores on the ticker for the previous 30 trading days
  3. Compute the median of the previous 30 trading days' NOPEs
  4. From the median, find the 30 days' median absolute deviation (https://en.wikipedia.org/wiki/Median_absolute_deviation)
  5. Find today's deviation as compared to the MAD calculated by: [(today's NOPE) - (median NOPE of last 30 days)] / (median absolute deviation of last 30 days)
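In code (a minimal sketch of the steps above; the 30-day history here is stand-in data):

    import statistics

    def nope_mad_sigma(todays_nope, prior_nopes):
        # Deviation of today's NOPE from its 30-day baseline, in units
        # of median absolute deviation (reported as sigma).
        med = statistics.median(prior_nopes)
        mad = statistics.median(abs(x - med) for x in prior_nopes)
        return (todays_nope - med) / mad

    history = [0.05, 0.02, -0.01, 0.04, 0.03] * 6  # 30 stand-in EOD NOPEs
    print(nope_mad_sigma(0.06, history))  # ~3.0, i.e. a "3 sigma" day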
This is usually reported as sigma (σ), and has a few interesting properties:
  1. The mean of NOPE_MAD for any ticker is almost exactly 0.
  2. [Lily's Speculation's Speculation] NOPE_MAD acts like a spring, and has a tendency to reverse direction as a function of its magnitude. No proof on this yet, but exploring it!

Using the NOPE to predict ER

So the last section was a lot of words and theory, and a lot of what I'm mentioning here is empirically derived (aka I've tested it out, versus just blabbered).
In general, the following holds true:
  1. 3 sigma NOPE_MAD tends to be "the threshold": For very low NOPE_MAD magnitudes (+- 1 sigma), it's effectively just noise, and directionality prediction is low, if not non-existent. It's not exactly like 3 sigma is a play and 2.9 sigma is not a play; NOPE_MAD accuracy increases as NOPE_MAD magnitude (either positive or negative) increases.
  2. NOPE_MAD is only useful on highly optioned tickers: In general, I introduce another parameter for sifting through "candidate" ERs to play: (option volume * 100) / share volume. When this ends up over let's say 0.4, NOPE_MAD provides a fairly good window into predicting earnings behavior.
  3. NOPE_MAD only predicts during the after-market/pre-market session: I also have no idea if this is true, but my hunch is that next day behavior is mostly random and driven by market movement versus earnings behavior. NOPE_MAD for now only predicts direction of price movements right between the release of the ER report (AH or PM) and the ending of that market session. This is why in general I recommend playing shares, not options for ER (since you can sell during the AH/PM).
  4. NOPE_MAD only predicts direction of price movement: This isn't exactly true, but it's all I feel comfortable stating given the data I have. On observation of ~2700 data points of ER-ticker events since Mar 2019 (SPY 500), I only so far feel comfortable predicting whether stock price goes up (>0 price difference) or down (<0 price difference). This is +1 for why I usually play with shares.
Some statistics:
#0) As a baseline/null hypothesis, after ER on the SPY 500 since Mar 2019, 50-51% of price movements in the AH/PM are positive (>0) and ~46-47% are negative (<0).
#1) For NOPE_MAD >= +3 sigma, roughly 68% of price movements are positive after earnings.
#2) For NOPE_MAD <= -3 sigma, roughly 29% of price movements are positive after earnings.
#3) When using a logistic model of only data including NOPE_MAD >= +3 sigma or NOPE_MAD <= -3 sigma, and option/share vol >= 0.4 (around 25% of all ERs observed), I was able to achieve 78% predictive accuracy on direction.
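For the curious, the screening + fit described in #3 looks roughly like this (the dataset, file name, column names, and single-feature model are hypothetical stand-ins, not my actual pipeline):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("er_events.csv")  # hypothetical ER-event dataset
    screen = (df["nope_mad"].abs() >= 3) & (df["opt_share_vol"] >= 0.4)
    sample = df[screen]

    model = LogisticRegression()
    model.fit(sample[["nope_mad"]], sample["went_up"])  # went_up: 1 if AH/PM move > 0
    print(model.score(sample[["nope_mad"]], sample["went_up"]))  # in-sample accuracy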

Caveats/Read This

Like all models, NOPE is wrong, but perhaps useful. It's also fairly new (I started working on it around early August 2020), and in fact, my initial hypothesis was exactly incorrect (I thought the opposite would happen, actually). Similarly, as commenters have pointed out, the timeline of data I'm using is fairly compressed (since Mar 2019), and trends and models do change. In fact, I've noticed significantly lower accuracy since the coronavirus recession (when I measured it in early September), but I attribute this mostly to a smaller date range, more market volatility, and honestly, dumber option traders (~65% accuracy versus nearly 80%).
My advice so far if you do play ER with the NOPE method is to use it as following:
  1. Buy/short shares approximately right when the market closes before ER. Ideally, buying right before the earnings report drops in the AH session is not a bad idea if you can.
  2. Sell/buy to close said shares at the first sign of major weakness (e.g. if the NOPE predicted outcome is incorrect).
  3. Sell/buy to close shares even if it is correct ideally before conference call, or by the end of the after-market/pre-market session.
  4. Only play tickers with high NOPE as well as high option/share vol.
---
In my next post, which may be in a few days, I'll talk about potential use cases for SPY and intraday trends, but I wanted to make sure this wasn't like 7000 words by itself.
Cheers.
- Lily

The Challenges of Designing a Modern Skill, Part 3

Okay, Wendy’s or Walgreens or whoever, I don’t care who you are, you’re listening to the rest.

Introduction to Part 3

Welcome back one last time to “The Challenges of Designing a Modern Skill,” a series where we discuss all aspects of skill design and development. In Part 1, we talked about OSRS’s history with skills, and started the lengthy conversation on Skill Design Philosophy, including the concepts of Core, Expansion, and Integration. This latter topic consumed the entirety of Part 2 as well, which covered Rewards and Motivations, Progression, Buyables, as well as Unconstructive Arguments.
Which brings us to today, the final part of our discussion. In this Part 3, we’ll finish up Section 3 – Skill Design Philosophy, then move on to chat about the design and blog process. One last time, this discussion was intended to be a single post, but its length outgrew the post character limit twice. Therefore, it may be important to look at the previous two parts for clarity and context with certain terms. The final product, in its purest, aesthetic, and unbroken form, can be found here.

3-C – Skill Design Philosophy, Continued

3-12 - Balancing

What follows from the discussion about XP and costs, of course, is balancing: the bane of every developer. A company like Riot knows better than anyone that having too many factors to account for makes good balance impossible. Balancing new ideas appropriately is extremely challenging and requires a great respect for current content as discussed in Section 3-5 – Integration. Thankfully, in OSRS we only have three major balancing factors: Profit, XP Rate, and Intensity, and two minor factors: Risk and Leniency. These metrics must amount to some sense of balance (besides Leniency, which as we'll see is the definition of anti-balance) in order for a piece of content to feel like it's not breaking the system or rendering all your previous efforts meaningless. It's also worth noting that there is usually a skill-specific limit to the numerical values of these metrics. For example, Runecrafting will never receive a training method that grants 200k xp/hr, while for Construction that's easily on the lower end of the scale.
A basic model works better than words to describe these factors, and therefore, being the phenomenal artist that I am, I have constructed one, which I’ve dubbed “The Guthix Scale.” But I’ll be cruel and use words anyway.
  • Profit: how much you gain from a task, or how much you lose. Gain or loss can include resources, cosmetics, specialized currencies, good old gold pieces, or anything on that line.
  • XP Rate: how fast you gain XP.
  • Intensity: how much effort (click intensity), attention (reaction intensity), and thought (planning intensity) you need to put into the activity to perform it well.
  • Risk: how likely is the loss of your revenue and/or resource investment into the activity. Note that one must be careful with risk, as players are very good at abusing systems intended to encourage higher risk levels to minimize how much they’re actually risking.
  • Leniency: a measure for how imbalanced a piece of content can be before the public and/or Jagex nerfs it. Leniency serves as a simple modulator to help comprehend when the model breaks or bends in unnatural ways, and is usually determined by how enjoyable and abusable an activity is, such that players don’t want to cause an outrage over it. For example, Slayer has a high level of Leniency; people don’t mind that some Slayer tasks grant amazing XP Rates, great Profits, have middling Intensity, and low Risk. On the other hand, Runecrafting has low levels of Leniency; despite low Risk, many Runecrafting activities demand high Intensity for poor XP Rates and middling Profits.
In the end, don’t worry about applying specific numbers during the conceptual phase of your skill design. However, when describing an activity to your reader, it’s always useful if you give approximations, such as “high intensity” or “low risk,” so that they get an idea of the activity’s design goals as well as to guide the actual development of that activity. Don’t comment on the activity’s Leniency though, as that would be pretty pretentious and isn’t for you to determine anyway.

3-13 - Skill Bloat

What do the arts of weaving, tanning, sewing, spinning, pottery, glassmaking, jewellery, engraving, carving, chiselling, carpentry, and even painting have in common? In real life, there's only so much crossover between these arts, but in Runescape they're all simply Crafting.
The distinction between what deserves to be its own skill or instead tagged along to a current skill is often arbitrary; this is the great challenge of skill bloat. The fundamental question for many skill concepts is: does this skill have enough depth to stand on its own? The developers of 2006 felt that there was sufficient depth in Construction to make it something separate from Crafting, even if the latter could have covered the former. While there’s often no clean cut between these skills (why does making birdhouses use Crafting instead of Construction?), it is easy to see that Construction has found its own solid niche that would’ve been much too big to act as yet another Expansion of Crafting.
On the other hand, a skill with extremely limited scope and value perhaps should be thrown under the umbrella of a larger skill. Take Firemaking: it’s often asked why it deserves to be its own skill given how limited its uses are. This is one of those ideas that probably should have just been thrown under Crafting or even Woodcutting. But again, the developers who made early Runescape did not battle with the same ideas as the modern player; they simply felt like Firemaking was a good idea for a skill. Similarly, the number of topics that the Magic skill covers is so often broken down in other games, like Morrowind’s separation between Illusion, Conjuration, Alteration, Destruction, Mysticism, Restoration, Enchant, Alchemy (closer to Herblore), and Unarmored (closer to Strength and Defense). Why does Runescape not break Magic into more skills? The answer is simple: Magic was created with a much more limited scope in Runescape, and there has not been enough content in any specific magical category to justify another skill being born. But perhaps your skill concept seeks to address this; maybe your Enchantment skill takes the enchanting aspects of Magic away, expands the idea to include current imbues and newer content, and fully fleshes the idea out such that the Magic skill alone cannot contain it. Somewhat ironically, Magic used to be separated into Good and Evil Magic skills in Runescape Classic, but that is another topic.
So instead of arguments about what could be thrown under another skill’s umbrella, perhaps we should be asking: is there enough substance to this skill concept for it to stand on its own, outside of its current skill categorization? Of course, this leads to a whole other debate about how much content is enough for a skill idea to deserve individuality, but that would get too deep into specifics and is outside the scope of this discussion.

3-14 - Skill Endgame

Runescape has always been a sandbox MMO, but the original Runescape experience was built more or less with a specific endgame in mind: killing players and monsters. Take the Runescape Classic of 2001: you had all your regular combat skills, but even every other skill had an endgame whose goal was helping combat out. Fishing, Firemaking, and Cooking would provide necessary healing. Smithing and Crafting, along with their associated Gathering skill partners, served to gear you up. Combat was the simple endgame and most mechanics existed to serve that end.
However, since those first days, the changing endgame goals of players have promoted a vast expansion of the endgame goals of new content. For example, hitting a 99 in any non-combat skill is an endgame goal in itself for many players, completely separate from that skill’s combat relationship (if any). These goals have increased to aspects like cosmetic collections, pets, maxed stats, all quests completed, all diaries completed, all music tracks unlocked, a wealthy bank, the collection log, boss killcounts, and more. Whereas skills used to have a distinct part of a system that ultimately served combat, we now have a vast variety of endgame goals that a skill can be directed towards. You can even see a growth in this perspective as new skills were released up to 2007: Thieving mainly nets you valuable (or once valuable) items which have extremely flexible uses, and Construction has a strong emphasis on cosmetics for your POH.
So when designing your new skill, contemplate what the endgame of your skill looks like. For example, if you are proposing a Gathering skill, what is the Production skill tie-in, and what is the endgame goal of that Production skill? Maybe your new skill Spelunking has an endgame in gathering rare collectibles that can be shown off in your POH. Maybe your new skill Necromancy functions like a Support skill, giving you followers that help speed along resource gathering, and letting you move faster to the endgame goal of the respective Production skill. Whatever it is, a proper, clear, and unified view of an endgame goal helps a skill feel like it serves a distinct and valuable purpose. Note that this could mean that you require multiple skills to be released simultaneously for each to feed into each other and form an appropriate endgame. In that case, go for it – don’t make it a repeat of RS3’s Divination, a Gathering skill left hanging without the appropriate Production skill partner of Invention for over 2 years.
A good example of a skill with a direct endgame is… most of them. Combat is a well-accepted endgame, and traditionally, most skills are intended to lend a hand in combat whether by supplies or gear. A skill with a poor endgame would be Hunter: Hunter is so scattered in its ultimate endgame goals, trying to touch on small aspects of everything like combat gear, weight reduction, production, niche skilling tools, and food. There’s a very poor sense of identity to Hunter’s endgame, and it doesn’t help that very few of these rewards are actually viable or interesting in the current day. Similarly, while Slayer has a strong endgame goal it is terrible in its methodology, overshadowing other Production skills in their explicit purpose. A better design for Slayer’s endgame would have been to treat it as a secondary Gathering skill, to work almost like a catalyst for other Gathering-Production skill relationships. In this mindset, Slayer is where you gather valuable monster drops, combine it with traditional Gathering resources like ores from Mining, then use a Production skill like Smithing to meld them into the powerful gear that is present today. This would have kept other Gathering and Production skills at the forefront of their specialities, in contrast to today’s situation where Slayer will give fully assembled gear that’s better than anything you could receive from the appropriate skills (barring a few items that need a Production skill to piece together).

3-15 - Alternate Goals

From a game design perspective, skills are so far reaching that it can be tempting to use them to shift major game mechanics to a more favourable position. Construction is an example of this idea in action: Construction was very intentionally designed to be a massive gold sink to help a hyperinflating economy. Everything about it takes gold out of the game, whether through using a sawmill, buying expensive supplies from stores, adding rooms, or a shameless piece of furniture costing 100m that is skinned as, well, 100m on a shameless piece of furniture.
If you’re clever about it, skills are a legitimately good opportunity for such change. Sure, the gold sink is definitely a controversial feature of Construction, but for the most part it’s organic and makes sense; fancy houses and fancy cosmetics are justifiably expensive. It is notable that the controversy over Construction’s gold sink mechanism is probably levied more against the cost of training, rather than the cost of all its wonderful aesthetics. Perhaps that should have been better accounted for in its design phase, but now it is quite set in stone.
To emphasize that previous point: making large scale changes to the game through a new skill can work, but it must feel organic and secondary to the skill’s main purpose. Some people really disliked Warding because they felt it tried too hard to fix real, underlying game issues with mechanics that didn’t thematically fit or were overshadowing the skill’s Core. While this may or may not be true, if your new skill can improve the game’s integrity without sacrificing its own identity, you could avoid this argument entirely. If your skill Regency has a Core of managing global politics, but also happens to serve as a resource sink to help your failing citizens, then you’ve created a strong Core design while simultaneously improving the profitability of Gathering skills.

3-16 - The Combat No-Touch Rule

So, let’s take a moment to examine the great benefits and rationale of RS2’s Evolution of Combat:
This space has been reserved for unintelligible squabbling.
With that over, it’s obvious that the OSRS playerbase is not a big fan of making major changes to the combat system. If there’s anything that defines the OSRS experience, it has to be the janky and abusable combat system that we love. So, in the past 7 years of OSRS, how many times have you heard someone pitch a new combat skill? Practically no one ever has; a new combat skill, no matter how miniscule, would feel obtrusive to most players, and likely would not even receive 25% of votes in a poll. This goes right back to Section 3-5 – Integration, and the importance of preserving the fundamentals of OSRS’s design.
I know that my intention with this discussion was to be as definitive about skill design as possible, and in that spirit I should be delving into the design philosophy specifically behind combat skills, but I simply don’t see the benefit of me trying, and the conversation really doesn’t interest me that much. It goes without saying that as expansive as this discussion is, it does not cover every facet of skill design, which is a limitation both of my capabilities and desire to do so.

3-17 - Aesthetics

I don’t do aesthetics well. I like them, I want them, but I do not understand them; there are others much better equipped to discuss this topic than I. Nonetheless, here we go.
Since the dawn of OSRS, debates over art style and aesthetics have raged across Gielinor. After all, the OSRS Team is filled with modern day artists while OSRS is an ancient game. What were they supposed to do? Keep making dated graphics? Make content with a modernized and easily digestible style? Something in-between?
While many players shouted for more dated graphics, they were approached by an interesting predicament: which dated graphics did they want? We had a great selection present right from the start of OSRS: 2002, 2003, 2004, 2005, 2006, and 2007. People hungry for nostalgia chose the era that they grew up in, leading to frequent requests for older models like the dragon or imp, most of which were denied by Jagex (except the old Mining rock models). But which era was OSRS supposed to follow?
Jagex elected to carve their own path, but not without heavy criticism especially closer to OSRS’s conception. However, they adapted to player requests and have since gone back and fixed many of the blatant early offenders (like the Kingdom of Kourend) and adopted a more consistent flavour, one that generally respects the art style of 2007. Even though it doesn’t always hit the mark, one has to appreciate the OSRS artists for making their best attempt and listening to feedback, and here’s to hoping that their art style examination mentioned in June 2020’s Gazette bears fruit.
But what exactly is the old school art style? There are simple systems by which most players judge it in OSRS, usually by asking questions like, “Would you believe if this existed in 2007?” More informed artists will start pointing out distinct features that permeated most content from back in the day, such as low quality textures, low poly models, low FPS animations, a “low fantasy” or grounded profile that appeals somewhat to realism, reducing cartoonish exaggerations, and keeping within the lore. Compiled with this, music and sound design help that art style come to life; it can be very hard on immersion when these don’t fit. An AGS would sound jarring if its special attack sounded like a weak dagger stab, and having to endure Country Jig while roaming Hosidius suddenly sweeps you off into a different universe.
But coming back to skill design, the art, models, and sound design tend to be some of the last features, mostly because the design phase doesn’t demand such a complete picture of a skill. However, simple concept art and models can vastly improve how a skill concept is communicated and comfort players who are concerned about maintaining that “old school feel.” This will be touched on again later in this discussion under Section 5-2 – Presentation and Beta Testing.

3-18 - Afterword

Now we’ve set down the modern standards for a new skill, but the statements that started this section bear repeating: the formula we’ve established does not automatically make a good or interesting skill, as hard as we might have tried. Once again, harken back to the First Great Irony: that we are trying to inject the modern interpretation of what defines a skill upon a game that was not necessarily built to contain it. Therefore, one could just as easily deny each of the components described above, as popular or unpopular as the act might be, and their opinion could be equally valid and all this effort meaningless. Don’t take these guidelines with such stringency as to disregard all other views.

5-0 - The OSRS Team and the Design Process

If you’ve followed me all the way here, you’re likely A) exhausted and fed up of any conversation concerning new skills, or B) excited, because you’ve just struck an incredible skill idea (or perhaps one that’s always hung around your head) that happens to tick off all the above checkboxes. But unfortunately for you B types, it’s about to get pretty grim, because we’re going to go through every aspect of skill design that’s exterior to the game itself. We’ll be touching on larger topics like democracy, presentation, player mindsets, effort, and resource consumption. It’ll induce a fantastic bout of depression, so don’t get left behind.

5-1 - Designing a Skill

Thus far, Jagex has offered three potential skills to OSRS, each of which has been denied. This gives us the advantage of understanding how the skill design process works behind the scenes and lets us examine some of the issues Jagex has faced with presenting a skill to the players.
The first problem is the “one strike and you’re out” phenomenon. Simply put, players don’t like applying much effort into reading and learning. They’ll look at a developer blog highlighting a new skill idea, and if you’re lucky they’ll even read the whole thing, but how about the second developer blog? The third? Fourth? Even I find it hard to get that far. In general, people don’t like long detail-heavy essays or blogs, which is why I can invoke the ancient proverb “Ban Emily” into this post and it’ll go (almost) completely unnoticed. No matter how many improvements you make between developer blogs, you will quickly lose players with each new iteration. Similarly, developer blogs don’t have the time to talk about skill design philosophy or meta-analyse their ideas – players would get lost far too fast. This is the Second Great Irony of skill design: the more iterations you have of a lengthy idea, the less players will keep up with you.
This was particularly prominent with Warding: Battle Wards were offered in an early developer blog but were quickly cut when Jagex realized how bad the idea was. Yet people would still cite Battle Wards as the reason they voted against Warding, despite the idea having been dropped several blogs before. Similarly, people would often comment that they hated that Warding was being polled multiple times; it felt to them like Jagex was trying to brute-force it into the game. But Warding was only ever polled once, and only after the fourth developer blog - the confusion was drawn from how many times the skill was reiterated and from the length of the public design process. Sure, there are people for whom this runs the opposite way; they keep a close eye on updates and judge a piece of content on the merits of the latest iteration, but this is much less common. You could argue that one should simply disregard the ignorant people as blind comments don't contribute to the overall discussion, but you should remember that these players are also the ones voting for the respective piece of content. You could also suggest re-educating them, which is exactly what Jagex attempts with each developer blog, and still people won’t get the memo. And when it comes to the players themselves, can the playerbase really be relied on to re-educate itself?
Overall, the Second Great Irony really hurts the development process and is practically an unavoidable issue. What's the alternative? To remove the developer-player interface that leads to valuable reiterations, or do you simply have to get the skill perfect in the first developer blog?
It’s not an optimal idea, but it could help: have a small team of “delegates” – larger names that players can trust, or player influencers – come in to review a new, unannounced skill idea under NDA. If they like it, chances are that other players will too. If they don’t, reiterate or toss out the skill before it’s public. That way, you’ve had a board of experienced players who are willing to share their opinions to the public helping to determine the meat and potatoes of the skill before it is introduced to the casual eye. Now, a more polished and well-accepted product can be presented on the first run of selling a skill to the public, resulting in less reiterations being required, and demanding less effort from the average player to be fully informed over the skill’s final design.

5-2 - Presentation and Beta Testing

So you’ve got a great idea, but how are you going to sell it to the public? Looking at how the OSRS Team has handled it throughout the years, there’s a very obvious learning curve occurring. Artisan had almost nothing but text blogs being thrown to the players, Sailing started introducing some concept art and even a trailer with terrible audio recording, and Warding had concept art, in game models, gifs, and a much fancier trailer with in-game animations. A picture or video is worth a thousand words, and often the only words that players will take out of a developer blog.
You might say that presentation is everything, and that would be more true in OSRS than most games. Most activities in OSRS are extremely basic, involve minimal thought, and are incredibly grindy. Take Fishing: you click every 20 seconds on a fishing spot that is randomly placed along a section of water, get rid of your fish, then keep clicking those fishing spots. Boiling it down further, you click several arbitrary parts of your computer screen every 20 seconds. It’s hardly considered engaging, so why do some people enjoy it? Simply put: presentation. You’re given a peaceful riverside environment to chill in, you’re collecting a bunch of pixels shaped like fish, and a number tracking your xp keeps ticking up and telling you that it matters.
Now imagine coming to the players with a radical new skill idea: Mining. You describe that Mining is where you gather ores that will feed into Smithing and help create gear for players to use. The audience ponders momentarily, but they’re not quite sure it feels right and ask for a demonstration. You show them some gameplay, but your development resources were thin and instead of rocks, you put trees as placeholders. Instead of ores in your inventory, you put logs as placeholders. Instead of a pickaxe, your character is swinging a woodcutting axe as a placeholder. Sure, the mechanics might act like mining instead of woodcutting, but how well is the skill going to sell if you haven’t presented it correctly or respected it contextually?
Again, presentation is everything. Players need to be able to see the task they are to perform, see the tools they’ll use, and see the expected outcomes; otherwise, whatever you’re trying to sell will feel bland and unoriginal. And this leads to the next level of skill presentation that has yet to be employed: Beta Worlds.
Part of getting the feel of an activity is not just watching it but acting it out as well - you’ll never understand the thrill of skydiving unless you’ve actually been skydiving. Beta Worlds are that chance for players to act out a concept without risking the real game’s health. A successful Beta can inspire confidence in players that the skill has a solid Core and interesting Expansions, while a failed Beta will make them glad that they got to try it and be fully informed before putting the skill to a poll (although that might be a little too optimistic for rage culture). Unfortunately, Betas are not without major disadvantages, the most prominent of which we shall investigate next.

5-3 - Development Effort

If you thought that the previous section on Skill Design Philosophy was lengthy and exhausting, imagine having to know all that information and then put it into practice. Mentally designing a skill in your head can be fun, but putting it all down on paper and making it actually work together, feel fully fleshed out, and follow all the modern standards that players expect is extremely heavy work, especially when it’s not guaranteed to pay off in the polls like Quest or Slayer content. That’s not even taking into account the potentially immense cost of developing a new skill should it pass a poll.
Whenever people complain that Jagex is wasting resources trying to make a specific skill work, Jagex has been very explicit that the cost of pulling together a design blog is pretty minimal. Looking at the previous blogs, Jagex is probably telling the truth. It’s all just a bunch of words, a couple of art sketches, and maybe a basic in-game model or gif. Not to downplay the time it takes to write well, design good models, or generate concept art, but it’s nothing like the scale of resources that some players make it out to be. Of course, if a Beta was attempted as suggested in the last section, this conversation would take a completely new turn, and the level of risk to invested resources would increase exponentially. But this conversation calls to mind an important question: how much effort and how many resources does a skill require to feel complete?
Once upon a time, you could release a skill which was more or less unfinished. Take Slayer: it was released in 2005 with a pretty barebones structure. The fundamentals were all there, but the endgame was essentially a couple of cool best-in-slot weapons and that was it. Since then, OSRS has updated the skill to include a huge Reward Shop system, feature 50% more monsters to slay, and become an extremely competitive money-maker. Skills naturally undergo development over time, but it so often comes up during the designing of an OSRS skill that it "doesn't have enough to justify its existence." This was touched on deeply in Section 3-13 – Skill Bloat, but deserves reiterating here. While people recognize that skills continually evolve, the modern standard expects a new skill, upon release, to be fully preassembled before purchase. Whereas once you could get away with releasing just a skill's Core and working on Expansions down the line, that is no longer the case. But perhaps a skill might stand a better chance now than it did last year, given that the OSRS Team has doubled in number since that time.
However, judging from the skill design phases that have previously been attempted (as we’ve yet to see a skill development phase), the heaviest cost has been paid in developer mentality and motivational loss. When a developer is passionate about an idea, they spend their every waking hour pouring their mind into how that idea is going to function, especially while they’re not at work. And then they’re obligated to take player feedback and adapt their ideas, sometimes starting from scratch, particularly over something as controversial as a skill. Even if they have tough enough skin to take the heavy criticism that comes with skill design, having to write and rewrite repeatedly over the same idea to make it “perfect” is mentally exhausting. Eventually, their motivation drains as their labour bears little fruit with the audience, and they simply want to push it to the poll and be done with it. Even once all their cards are down, there’s still no guarantee that their efforts will be rewarded, even less so when it comes to skills.
With such a high mental cost and a low rate of success, you have to ask, “Was it worth it?” And that’s why new skill proposals are few and far between. A new skill used to be exciting for the development team back in the actual days of 2007, when they had the developmental freedom to do whatever they wanted, but in the modern day that is not so much the case.

5-4 - The Problems of Democracy

Ever since the conceptualization of democracy in the real world, people have been very aware of its disadvantages. And while I don’t have the talent, knowledge, or time to discuss every one of these factors, there are a few that are very relevant when it comes to the OSRS Team and the polling process.
But first we should recognize the OSRS Team’s relationship with the players. More and more, the Team acts like a government to its citizens, the players, and although this situation was intentionally instated with OSRS’s release, it’s even more prominent now. The Team decides the type of content that gets to go into a poll, and the players get their input over whether that particular piece makes it in. Similarly, players make suggestions to the Team that, in many cases, the Team hadn’t thought of themselves. This synergy is phenomenal and almost unheard of among video games, but the polling system changes the mechanics of this relationship.
Polls were introduced to the burned and scarred population of players at OSRS’s release in 2013. Many of these players had just freshly come off RS2 after a series of disastrous updates or had quit long before from other controversies. The Squeal of Fortune, the Evolution of Combat, even the original Wilderness Removal had forced numerous players out and murdered their trust in Jagex. To try and get players to recommit to Runescape, Jagex offered OSRS a polling system by which the players would determine what went into the game, where the players got to hold all the cards. They also asked the players what threshold should be required for polled items to pass, and among the odd 50% or 55% being shouted out, the vast majority of players wanted 70%, 75%, 80%, or even 85%. There was a massive population in favour of a conservative game that would mostly remain untouched, and therefore kept pure from the corruption RS2 had previously endured.
Right from the start, players started noticing holes in this system. After all, the OSRS Team was still the sole decider of what would actually be polled in the first place. Long-requested changes took forever to be polled (if ever polled at all) if the OSRS Team didn’t want to deal with that particular problem or didn’t like that idea. Similarly, the Team essentially had desk jobs with a noose kept around their neck – they could perform almost nothing without the players, their slave masters, seeing, criticizing, and tearing out every inch of developmental or visionary freedom they had. Ever hear about the controversy of Erin the duck? Take a look at the wiki or do a search through the subreddit history. It’s pretty fantastic, and a good window into the minds of the early OSRS playerbase.
But as the years have gone on, the perspective of the players has shifted. There is now a much healthier and more trusting relationship between them and the Team, much more flexibility in what the players allow the Team to handle, and a much greater tolerance and even love of change.
But the challenges of democracy haven’t just fallen away. Everyone having the right to vote is a fundamental tenet of the democratic system, but unfortunately that also means that everyone has the right to vote. For OSRS, that means every member gets a vote: players on their first day in game and players ten thousand hours deep, those who have no idea what the poll’s about, those who haven’t read a single quest (the worst group), those who RWT and bot, those who scam and lure, and every professional armchair developer like myself. In short, no one will ever be perfectly informed on every aspect of the game, or at least know when they ought to skip a question. Similarly, people will almost never vote in favour of making their game harder, even at the cost of game integrity, or at least not enough people would vote in such a fashion to reach a 75% majority.
These issues are well recognized. The adoption of the controversial “integrity updates” was Jagex’s solution to these problems. In this way, Jagex has become even more like a government to the players. The average citizen of a democratic country cannot and will not make major decisions that favour everyone around themselves if it comes at a personal cost. Rather, that’s one of the major roles of a government: to make decisions for changes for the common good that an individual can’t or won’t make on their own. No one’s going to willingly hand over cash to help repave a road on the opposite side of the city – that’s why taxes are a necessary evil. It’s easy to see that the players don’t always know what’s best for their game and sometimes need to rely on that parent to decide for them, even if it results in some personal loss.
But players still generally like the polls, and Jagex still appears to respect them for the most part. Being the government of the game, Jagex could very well choose to ignore them, but would risk the loss of their citizens to other lands. And there are some very strong reasons to keep them: the players still like having at least one hand on the wheel when it comes to new content or ideas. Also, it acts as a nice veto card should Jagex try to push RS3’s abusive tactics on OSRS and therefore prevent such potential damage.
But now we come to the topic of today: the introduction of a new skill. Essentially, a new skill must pass a poll in order to enter the game. While it’s easy to say, “If a skill idea is good enough, it’ll pass the threshold,” that’s not entirely true. The only skill that could really pass the 75% mark is not necessarily a well-designed skill, but rather a crowd-pleasing skill. While the two aren’t mutually exclusive, the latter is far easier to make than the former. Take Dungeoneering: if you were to poll it today as an exact replica of RS2’s version, it would likely be the highest-scoring skill yet, perhaps even passing, despite every previously emphasized criticism describing why it has no respect for the current definition of “skill.” Furthermore, a crowd-pleasing skill can easily fall prey to deindividualization of vision and result in a bland “studio skill” (in the same vein as a “studio film”), one that feels manufactured by a board of soulless machines rather than a director’s unique creation. This draws straight back to the aforementioned issues with democracy: people A) don’t always understand what they’re voting for or against, and B) will never vote for something that makes their game tougher or yields them no personal benefit. Again, these were not issues in the old days of RS2, but they are the problems we face with our modern standards and decision-making systems.
The reality that must be faced is that the polling system is not an engine of creation, nor is it a means of constructive feedback – it’s a system of judgement, binary and oversimplified in its methodology. It’s easy to interact with and requires no more than 10 seconds of a player’s time, a mere mindless moment, to decide the fate of an idea made by an individual or team, regardless of that player's deep or shallow knowledge of game mechanics, strong or weak vision of design philosophy, great or terrible understanding of the game’s history, and awareness of or blindness towards the modern community. It’s a system which disproportionately boils down the quality of discussion that a skill necessitates, giving it the same significance as the question “Should we allow players to recolour the Rocky pet by feeding it berries?”, with the only available answers being a dualistic “This idea is perfect and should be implemented exactly as outlined” or “This idea is terrible and should never be spoken of again.”
So what do you do? Let Jagex throw in whatever they want? Reduce the threshold, or reduce it just for skills? Make a poll that lists a bunch of skills and forces the players to choose one of them to enter the game? Simply poll the question, “Should we have a new skill?” then let Jagex decide what it is? Put more options on the scale of “yes” to “no” and weigh each appropriately? All these options sound distasteful because there are obvious weaknesses to each. But that is the Third Great Irony we face: an immense desire for a new skill, but no realistic means to ever get one.

6-0 - Conclusion

I can only imagine that if you’ve truly read everything up to this point, it’s taken you through quite the rollercoaster. We’ve walked through the history of OSRS skill attempts, unconstructive arguments, various aspects of modern skill design philosophy, and the OSRS Team and skill design process. When you take it all together, it’s easy to get overwhelmed by all the thought that needs to go into a modern skill and all the issues that might prevent its success. Complexity, naming conventions, categorizations, integration, rewards and motivations, bankstanding and buyables, the difficulties of skill bloat, balancing, and skill endgames, aesthetics, the design process, public presentation, development effort, democracy and polling - these are the challenges of designing and introducing modern skills. To have to cope with it all is draining and maybe even impossible, and therefore it begs the question: is trying to get a new skill even worth it?
Maybe.
Thanks for reading.
Tl;dr: Designing a modern skill requires acknowledging the vast history of Runescape, understanding why players make certain criticisms and what exactly they’re saying in terms of game mechanics, before finally developing solutions. Only then can you subject your ideas to a polling system that is built to oversimplify them.
submitted by ScreteMonge to 2007scape

My computer freezes except when I am monitoring it

Hey guys, sorry to bother you with this. I usually try to check if there are similar posts or guides, but o boi. I will try to be detailed, not sure what matters or not, so sorry about that as well. The story is: I bought a computer for gaming last year. No problems at all the whole time, until about a month ago. I was playing Sekiro, and sometimes it would randomly freeze the screen and sometimes continue or distort the audio. The computer freezes until restart. At the 3rd/4th attempt it would run normally as if nothing ever happened. Beat the game while this issue was going on, np. After that, I decided to play Amnesia: Rebirth. Then my PC decided to go all out Johnny Sins on me and would crash every two minutes in, no escape. Since I usually try to pirate/configure a thing or two I thought it could be malware. So I ran every single option of Windows Defender and Malwarebytes. One came up from a random game. Deleted it. Tried to repair Windows. Followed several guides for system restoration and scans. Checked drivers and so on. Problem persisted. Eventually I was working from home and in the middle of it the problem decided to happen again, crashing the whole computer and not turning on correctly until the third reset. Oh, so that was how my journey was going. Windows bitchslapping me out of nowhere. So I slapped back and restored Windows. First saving the files and deleting programs. My computer gave zero fucks about it and the problem persisted. So I summoned my asshat mode and did a full restore. Reinstall Windows, delete absolutely everything, clean all units, and pray for our lord and savior Shaggy to overlook the process. Since I am an atheist it didn't work. I installed just the GeForce drivers and thought maybe it would run now. Also decided to download a newer version of the game. Guess what? Bingo bango bongo. The computer crashed within two minutes of the game. Also crashed on Spelunky 2, since I was trying to get angry at something else. Because why not.
By process of elimination I thought the only thing I had installed could be guilty: GeForce Experience and the drivers. I also looked at several posts here and elsewhere and it appeared as a possibility. First I turned off the grid but kept it. The game lasted a little longer, still to no avail. Then I tried deleting it. Still crashed, but the noise the computer made changed for some reason; the coolers went randomly more active. Same thing after uninstalling anything related to Nvidia. Same mockery from Satan. I thought maybe I fucked up by even installing it, so yeap, you guessed it: system restoration again. I could almost hear Steve Jobs laughing at me for not buying a Mac for 20x the price. Damn you Steve. So I tried just running the game without any new drivers to see what's up. DLLs were missing; I manually downloaded them. Still crashing. The random crashes in normal programs stopped after the restorations, so I thought it was something.
I tried checking for logs and crash reports but couldn't find any. So I downloaded a program that would actually look for any valid logs to analyze, in case it was even more of my blunt incompetence. I didn't find anything, even after the computer froze and crashed with it on. I checked possibilities around the BIOS, looked up firmware versions and anything else related to a solution or reason for these events. I ordered some things to actually clean the hardware, as it could be due to dust, or even my tears at this point in time. I am still waiting for them to arrive. Even if it is not the problem, I am still in an abusive relationship with my computer and care about it.
Nothing seemed to be working. One possible issue could be overheating for some reason, but since the computer would crash in less than two minutes it seemed very unlikely. All coolers are working in good condition. But welp. My hope was almost lost. If the cleaning didn't work, something about the hardware might be faulty, despite the computer's age. So I decided to simply go to the task manager and see if anything out of the ordinary was running. Nothing. As I wondered what in tarnation was going on with my life, I said fuck it and tried installing and updating every single driver. I also decided to dual screen, and while I played Amnesia I would look at the machine's status in the task manager itself. At least the basics: CPU, memory, SSD, GPU, temperature. I also opened the resource monitoring from there. I was at this point looking for a technician, as sheer fucking stupidity and persistence seemed to not be bearing the best fruits.
And then. Just out of fucking nowhere, like a flaming humongous dick coming from the sky straight to my ass, it worked. For absolutely no fucking reason I managed to play for 45 minutes straight with absolutely no problems whatsoever. Was I dreaming? Was this the real life? What was life? I knew no more. But it worked. I slowly walked away hoping that nothing would change until the next day. Maybe if I didn't look at it for too long it wouldn't smell my fear. Next day: worked normally, watched my classes, sucked at Spelunky with zero problems. I was still not trusting this new reality. Something was off. Turned on Amnesia. First plank out and my computer went to Neverland. I could almost hear the binary laugh from this little mf. It crashed several times for no reason whatsoever. Then I remembered my glimpse of hope the day before. It was one thousand percent bullshit, but hey, I have no dignity at this point in time. Turned on task manager and resource monitoring. It worked as if nothing wrong had ever happened to society.
I was legit going to look for a technician and beg for money in the streets to pay for the repairs. But now it's just past this point. It's a matter of honor. Of values. Of dignity. So I came here to beg all of you good doers to assist me on my quest to understand this fucking bullshit in my life. This just can't be serious. I can't see a single reason why, of all things, this specific action would cause it to work normally, and I have no clue what else to do.
Thank you very much for your attention.
TL;DR:
- Computer is less than a year old and I take good care of it
- Sometimes pirate programs, but try to look for the safest options very carefully
- Computer froze and crashed while playing games (Sekiro, Amnesia: Rebirth, Spelunky 2 [more rarely])
- Started crashing on regular programs such as Chrome
- Restored the system
- Erased every single file and cleaned the disk
- Checked for viruses (Windows Defender, Malwarebytes [all options available])
- Checked for issues with the driver itself and GeForce Experience
- Crash noise changes after deleting mentioned program and drivers, but still crashes
- Checked BIOS and firmware versions
- Tried with no new drivers, only manually installing missing DLLs
- Decided to update absolutely every single driver and Windows to their latest versions
- Downloaded a newer version of the game
- Checked for logs
- Downloaded a program to check for crashes, which found nothing even while on during a crash
- Nothing weird on task manager
- No new programs after the recovery (exceptions: Chrome, Firefox, qBittorrent, Daemon Tools Lite, DS4Windows)
- At none of those instances was the problem solved
- Opened task manager to see info on CPU, SSD, GPU and temperature; also opened the Resource Monitor
- The game suddenly works and never crashes again. Problem persists if those windows are closed or only opened during gameplay
TL;DR of the TL;DR: I am in pain, pls help
System configurations:
https://ibb.co/Qd6pst5 System: Windows 10 Pro - 20H2 - x64 Windows Feature Experience Pack 120.2212.31.0
P.S. I really don't know too much as I don't work with IT, so please, if you need any more info, or have any suggestions, I will try to answer as fast as possible. Sorry to cause any bother, and again, thank you for the attention.
submitted by MiddleShort9542 to techsupport

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. If you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however: there are some system requirements necessary to run the CodeReady Containers that we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform and has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or MacOS we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because CodeReady Containers and CodeReady Workspaces help programmers and developers build their applications faster and test them in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management by streamlining and automating the management process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are run within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don’t have this basic knowledge or have trouble with the basic command line interface commands in PowerShell, a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides, though the documentation can be overwhelming given the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
MacOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of container technologies like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Red Hat OpenShift CodeReady Containers requires the following minimum hardware:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution: Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press Login and then select the option “Create one now”.
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps please confirm that the correct and up to date crc binary is in use by checking it with the $crc version command, this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. During this process you have to supply your pull secret; once the process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command, which starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the virtual machine with the $crc delete command, then create and start a new virtual machine with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers, so to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to make changes to the virtual machine afterwards. For this tutorial it is not necessary to change the configuration; if you don’t want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a Nameserver error later on; if this is the case, please start the machine with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers, use the crc config command. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands are:
get, this command allows you to see the values of a configurable property
set/unset, these commands set or clear the value of a named configurable property.
view, this command displays the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, which skips the check or turns it into a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get <property>
C:\Users\[username]\$PATH>crc config set <property> <value>
C:\Users\[username]\$PATH>crc config unset <property>
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help
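As a concrete sketch of the skip-check mechanism described above (the property name below is an assumption for illustration; run crc config --help to see the real check names on your platform), skipping a single failing setup check could look like this:
C:\Users\[username]\$PATH>crc config set skip-check-hyperv-installed true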

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set CPUs <number>. Keep in mind that the default is 4 vCPUs and the number you assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <MiB>. Keep in mind that the default is 9216 mebibytes and the amount you assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number>
C:\Users\[username]\$PATH>crc config set memory <MiB>
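For example, to assign the virtual machine 6 vCPUs and 12288 MiB of memory (values chosen purely for illustration) and then read one of the settings back, you could run something like the following; note that depending on your crc version the property may be spelled in lowercase (cpus), so check crc config --help for the exact names:
C:\Users\[username]\$PATH>crc config set CPUs 6
C:\Users\[username]\$PATH>crc config set memory 12288
C:\Users\[username]\$PATH>crc config get memory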

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks will run to verify the configuration.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the api.crc.testing entry to function properly; an entry is added to /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux, where NetworkManager is expected to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
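After editing the file, the new entries usually only take effect once NetworkManager reloads its configuration. On a systemd-based distribution (an assumption; adjust to your init system) this can be done with:
sudo systemctl reload NetworkManager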

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test whether this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to login as a developer user, this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that $crc start provides the password needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you’re logged in as a developer, you don’t have to switch. Switching between users can be done with the dropdown menu at the top left.
Now that you are properly logged in, press the dropdown menu shown in the image below; from there, click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the topology view. From here, select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and disk to the existing machine, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view shown in the previous step. By pressing the up or down arrow, more pods of the same application can be added or removed. This is a form of horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94
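The same scaling can also be done from the command line with the standard oc scale command. As a minimal sketch, assuming the deployment created earlier is named mediawiki (this depends on the name you gave the application):
C:\Users\[username]\$PATH>oc scale deployment/mediawiki --replicas=3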

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other over the network and assigns each of them their own IP address. This makes all containers within a Pod behave as if they were on the same host. Giving each pod its own IP address means pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
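As an illustration of that built-in DNS, every service gets a predictable in-cluster hostname of the form <service>.<project>.svc.cluster.local. Assuming the service and project names chosen earlier in this demonstration (mediawiki and codeready; yours may differ), a pod could reach the application at:
mediawiki.codeready.svc.cluster.local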
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and/or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
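For completeness, the same route can also be created from the command line instead of the console. A minimal sketch, again assuming the service is named mediawiki:
C:\Users\[username]\$PATH>oc expose service mediawiki
C:\Users\[username]\$PATH>oc get routes
The second command lists the routes in the current project, so you can verify the new route and its hostname.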

Storage

OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow a developer to request persistent volumes without needing any knowledge of the underlying infrastructure.
Within this storage there are a few configuration options, the most important being the reclaim policy. It is important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and the storage cannot be reassigned to another PV until it has been cleaned up.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or, if you wish to reuse the same storage asset, you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and display their following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner, check that you are logged in as Developer, and click on “Monitoring”. Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. That can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
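As a sketch of how groups look in practice (the group name developers and the project name codeready are assumptions for illustration), you could create a group containing the mediawiki user and grant the whole group the standard edit role in a project:
$oc adm groups new developers mediawiki
$oc adm policy add-role-to-group edit developers -n codeready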
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user. The way this is done depends on the configured identity provider, specifically on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name>
Here, <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an Identity with the identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username>
For example, the following command maps the ldap_provider:mediawiki_s identity to the user mediawiki:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we’re going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <binding-name> \ --clusterrole=<role> --user=<username>
The --clusterrole option is used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all resources and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 
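To check the results of these steps, you can list the users and identities the cluster now knows about with two standard oc commands:
$oc get users
$oc get identity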

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers can't connect to the internet due to a Nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see how this is going to look, together with a few terms that require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

How to generate (relative) secure paper wallets and spend them (Newbies)

How to generate (relatively) secure paper wallets. Everyone is invited to suggest improvements, make it easier or more robust, provide alternatives, comment on what they like or not, and also criticize it.
Also, this is a disclaimer: I'm new to all of this. First, I didn't buy a hardware wallet because they are not produced in my country and I couldn't trust that imported ones were not tampered with. So the other way was to generate a wallet myself (not your keys, not your money). I've spent several weeks instructing myself, reading various ways of generating wallets (including Glacier). As of now, I think this is the best method for a non-technical person that is high security, low cost, and not that lengthy.
FAQs: Why didn't I use Coleman's BIP 39 mnemonic method? Basically, I don't know how to audit the code. As a downside, we will have to write down our keys very accurately, keeping in mind that a mistype is fatal. We should also keep in mind that destruction of the key is fatal as well. The user has to secure the key against loss, theft, and destruction.
Let's start
You'll need:
Notes: We will be following the https://www.swansontec.com/bitcoin-dice.html guidelines. We will be creating our own random key instead of downloading the BitAddress JavaScript, for safety reasons. Following this guideline lets you audit the code that will create the public key and Bitcoin address. It's simple and short, and you can always test the code by inputting a known private key to tell whether the Bitcoin address it generates is legit or not. This process is done offline, so your private key never touches the internet.
Steps
1. Download the bitcoin-bash-tools and dice2key scripts from GitHub, the latest Ubuntu distribution, and LiLi, a tool to install Ubuntu on our flash drive (easier than what is proposed on Swansontec)

2. Install the live environment in a CD or USB, and paste the tools we are going to use inside of it (they are going to be located in file://cdrom)

  • Open up LiLi and insert your flash drive.

  • Make sure you’ve selected the correct drive (click refresh if drive isn’t showing).
  • Choose “ISO/IMG/ZIP” and select the Ubuntu ISO file you’ve downloaded in the previous step.
  • Make sure only “Format the key in FAT32” is selected.
  • Click the lightning bolt to start the format and installation process
  • https://99bitcoins.com/bitcoin-wallet/pape

    3. Open the Ubuntu environment on an offline computer that will never touch the internet again (there is some malware that infects the BIOS, so doing this on your regular computer is not safe, to my understanding)

    Restart your computer. Pressing F12 or F1 during the boot-up process will allow you to choose to run your operating system from your flash drive or CD. After the Ubuntu operating system loads, choose the "try Ubuntu" option.
    4. Roll the dice 100 times and convert the rolls into a 32-byte hexadecimal number by using dice2key

    To generate a Bitcoin private key using normal six-sided dice, run the following command to convert the dice rolls into a 32-byte hexadecimal number (a conceptual sketch of the conversion follows below):
    source dice2key (100 six-sided dice rolls)
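    For intuition only, here is a minimal bash sketch of what such a conversion does. This is not the audited dice2key script from the guide, just an illustration that treats the 100 rolls as a base-6 number (a rolled 6 standing for the digit 0) and keeps the low 32 bytes:
    #!/usr/bin/env bash
    # dice2hex.sh -- illustrative sketch only, NOT the audited dice2key script
    # usage: bash dice2hex.sh 351264...   (100 characters, each a die face 1-6)
    rolls="$1"
    digits=$(echo "$rolls" | tr '6' '0')   # map die face 6 to base-6 digit 0
    # convert base-6 to hex with bc, stripping bc's line-wrap continuations
    hex=$(echo "obase=16; ibase=6; $digits" | bc | tr -d '\\\n')
    # keep the low 64 hex digits (32 bytes); assumes the result is at least
    # that long, and the real script's exact reduction may differ
    echo "${hex: -64}"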

    5. Run newBitcoinKey 0x + your private key and it will give you your public key, Bitcoin address, and WIF. Save the private key and Bitcoin address, and check several times that you wrote them down correctly; you can verify by re-entering the key into the console from your paper. (I recommend writing down the private key, which is in hex, rather than the WIF, since the WIF is case sensitive and easier to copy down wrong; besides, from the private key you can always derive the WIF, which will let you transfer your funds.) If you lose your key, you lose your funds. Be careful.
    If auditing the code is not enough for you, you can also test it by inputting a known private key and checking that the Bitcoin address it generates is legit.
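    For example, a widely known test vector (my addition, not from the original guide): the secret exponent 1 corresponds to the uncompressed Bitcoin address 1EHNa6Q4Jz2uvNExL497mE43ikXhwF6kZm, so you can run:
    newBitcoinKey 0x0000000000000000000000000000000000000000000000000000000000000001
    If the uncompressed address printed does not match, do not trust your copy of the script.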
    I recommend you generate several keys and addresses, as this process is not super easy to do. Remember that you should never reuse your paper wallets (meaning that you should empty all of the funds from one address if you are making a payment). As such, a couple of addresses come in handy.
    At this point, there should be no way for information to leak out of the live CD environment. The live CD doesn't store anything on the hard disk, and there is no network connection. Everything that happens from now on will be lost when the computer is rebooted.
    Now, start the "Terminal" program, and type the following command:
    source ~/bitcoin.sh
    This will load the address-calculation script. Now, use the script to find the Bitcoin address for your private key:
    newBitcoinKey 0x(your dice digits)
    Replace the part that says "(your dice digits)" with the 64 digits found by rolling your pair of hexadecimal dice 32 times. Be sure there is no space between the "0x" and your digits. When all is said and done, your terminal window should look like this:
    user@ubuntu:~$ source ~/bitcoin.sh
    user@ubuntu:~$ newBitcoinKey 0x8010b1bb119ad37d4b65a1022a314897b1b3614b345974332cb1b9582cf03536
    ---
    secret exponent: 0x8010B1BB119AD37D4B65A1022A314897B1B3614B345974332CB1B9582CF03536
    public key:
        X: 09BA8621AEFD3B6BA4CA6D11A4746E8DF8D35D9B51B383338F627BA7FC732731
        Y: 8C3A6EC6ACD33C36328B8FB4349B31671BCD3A192316EA4F6236EE1AE4A7D8C9
    compressed:
        WIF: L1WepftUBemj6H4XQovkiW1ARVjxMqaw4oj2kmkYqdG1xTnBcHfC
        bitcoin address: 1HV3WWx56qD6U5yWYZoLc7WbJPV3zAL6Hi
    uncompressed:
        WIF: 5JngqQmHagNTknnCshzVUysLMWAjT23FWs1TgNU5wyFH5SB3hrP
        bitcoin address: [lost in the original formatting]
    user@ubuntu:~$
    The script produces two public addresses from the same private key. The "compressed" address format produces smaller transaction sizes (which means lower transaction fees), but it's newer and not as well-supported as the original "uncompressed" format. Choose which format you like, and write down the "WIF" and "bitcoin address" on a piece of paper. The "WIF" is just the private key, converted to a slightly shorter format that Bitcoin wallet apps prefer.
    Double-check your paper, and reboot your computer. Aside from the copy on the piece of paper, the reboot should destroy all traces of the private key. Since the paper now holds the only copy of the private key, do not lose it, or you will lose the ability to spend any funds sent to the address!
    Conclusion
    With this method you are creating an air-gapped environment that will never touch the internet, and we are also checking that the code we use is not tampered with. If this is followed strictly, I see virtually no chance of your keys being hacked.
    How to spend your funds from a securely generated paper wallet.
    Almost all tutorials you'll see online will have you import or sweep your private keys into a desktop or mobile wallet, which are hot wallets. In the meantime, you are exposed, and all of your work to secure the cold storage is thrown away. This method will let you sign the transaction offline (you will not expose your private key on an online system).
    You'll need:
    The source of this method is CryptoGuide on YouTube: https://www.youtube.com/watch?v=-9kf9LMnJpI&t=86s . Basically you can follow his video as it is foolproof. Please check that the Electrum distribution is signed.
    The summarized steps are:
    1. Download Electrum on both devices and check that it is signed, for safety.
    2. Disconnect your phone from the internet (flight mode = all connections off) and input your private key into Electrum.
    3. Generate the transaction on your desktop and export it via QR (never leave unspent BTC or you will lose them).
    4. On your phone, open Electrum > Send > QR (this will import the transaction) and scan the desktop-exported transaction.
    5. Sign the transaction on your phone.
    6. Export the signed transaction as a QR code.
    7. Load the signed transaction into the desktop Electrum and broadcast it to the network.
    8. Wait for 3 confirmations before connecting your phone to the internet again.
    Ideas for improvement:
    So that's it. I hope someone can find this helpful or help in creating a better method. If you like, you can donate at 1Che7FG93vDsbes6NPBhYuz29wQoW7qFUH
    submitted by Heron-Express to Bitcoin [link] [comments]

    Forex Signals Reddit: top providers review (part 1)

    Forex Signals - TOP Best Services. Checked!

    To invest in the financial markets, we must acquire good tools that help us carry out our operations in the best possible way. In this sense, we always talk about the importance of brokers; however, signal systems must also be taken into account.
    Platforms that offer signals for investing in forex provide us with alerts that help us, in a significant way, to carry out successful operations.
    For this reason, we are going to tell you about the importance of these alerts in relation to the trading we carry out, because, without a doubt, this type of system provides very good information for investing at the right time and in the best assets in the different financial markets.
    Within this context, we will focus on Forex signals, since Forex is the most important market in the world: multiple transactions are carried out in it on a daily basis, hence the importance of having an alert system that offers us all the necessary data to invest in currencies.
    Also, as we all know, cryptocurrencies have become a very popular alternative to investing in traditional currencies, and some trading services and tools have emerged to help us carry out successful operations in that particular market.
    In the following points, we will detail everything you need to know to start operating in the financial markets using trading signals: what signals are, how they work, and why they are a very powerful help. Let's go!

    What are Forex Trading Signals?

    https://preview.redd.it/vjdnt1qrpny51.jpg?width=640&format=pjpg&auto=webp&s=bc541fc996701e5b4dd940abed610b59456a5625
    Before explaining the importance of Forex signals, let's start by making a small note so that we know what exactly these alerts are.
    Thus, we will see that currency-market signals are alerts traders receive containing the information that concerns Forex, both for individual assets and for the market itself.
    These alerts let us know the movements that occur in the Forex market and the changes that occur in the different currency pairs. But the great advantage of this type of system is that it provides the information we need to know when the right time is to carry out our investments.
    In other words, through these signals we will spot the opportunities presented by the market and be able to carry out operations that can become quite profitable.
    Profitability is precisely another fundamental aspect to take into account when we talk about Forex signals, since the vast majority of these alerts offer fairly reliable data on assets. Similarly, these signals can also provide recommendations or advice to make our operations more successful.

    »Purpose: predict movements to carry out Profitable Operations

    In short, Forex signal systems aim to predict the behavior of the different assets in the market, and this is achieved thanks to new technologies, the creation of specialized software, and, of course, the work of financial experts.
    In addition, it must be borne in mind that the reliability of these alerts lies largely in the fact that they are prepared by financial professionals, so they turn out to be a perfect tool for making our investments more profitable.

    The best signal services today

    We are going to tell you about the 3 main alert services currently on the market. There are many more, but I can assure you these are not scams and are reliable. Of course, not 100% of trades will be winners, so please make sure you apply a proper money management and risk management system.

    1. 1000pipbuilder (top choice)

    Fast-track your success and follow the high-performance Forex signals from 1000pip Builder. These Forex signals are rated 5 stars on Investing.com, so you can follow every signal with confidence. All signals are sent by a professional trader with over 10 years of investment experience. This is a unique opportunity to see with your own eyes how a professional Forex trader trades the markets.
    The 1000pip Builder membership is essentially a signal service for Forex trading. You will get all the facts you need to successfully follow the trading signals, set your stop loss and take profit, as well as additional tips and techniques!
    You will get easy-to-use trading signals for Forex trades, including your entry, stop loss, and take profit. Overall, the profit target is 350 pips per month; depending on your funding this can be a high profit per month! (There is, of course, no guarantee, but the past months were all between 600 and 1000 pips.)
    >>>Know more about 1000pipbuilder
    Your 1000pip Builder membership gives you everything you need to start trading Forex with success. Read the directions and wait for the first signals. You can trade them inside your demo account first, so you can evaluate the performance before you invest real money!
    Features:
    • Free Trial
    • Forex signals sent by email and SMS
    • Entry price, take profit and stop loss provided
    • Suitable for all time zones (signals sent over 24 hours)
    • MyFXBook verified performance
    • 10 years of investment experience
    • Target 300-400 pips per month
    Pricing:
    https://preview.redd.it/zjc10xx6ony51.png?width=668&format=png&auto=webp&s=9b0eac95f8b584dc0cdb62503e851d7036c0232b
    VISIT 1000pipbuilder here

    2. DDMarkets

    Digital Derivatives Markets (DDMarkets) have been providing trade alerts since May 2014, fully documenting their trade ideas in an open and transparent manner.
    September 2020 performance report for DD Markets.
    Their approach is simple: carry out extensive research, share their evaluation, and then deliver a trading signal when triggered. Once a signal is issued, daily updates on the trade are sent to members via email.
    It's essential to note that DDMarkets do not tolerate floating in an open drawdown in an effort to profit at any cost - a common method used by less professional providers to 'fudge' performance statistics.
    Verified Statistics: Not independently verified.
    Price: plans from $74.40 per month.
    Year Founded: 2014
    Suitable for Beginners: Yes (includes easy-to-follow trade analysis).
    VISIT
    -------

    3. JKonFX

    If you are looking for a forex signal service with a reliable (and profitable) track record, you can't go past Joel Kruger and the team at JKonFX.
    Trading performance record for JKonFX.
    Joel delivered a respectable +59.18% journaled performance for 2016, providing real-time technical and fundamental insights, in an extremely transparent manner, to their 30,000+ subscriber base. Considered a low-frequency trader, alerts are only a small part of the overall JKonFX subscription. If you're searching for hundreds of signals, you may want to consider other options.
    Verified Statistics: Not independently verified.
    Price: plans from $30 per month.
    Year Founded: 2014
    Suitable for Beginners: Yes (includes easy-to-follow video updates).
    VISIT

    The importance of signals to invest in Forex

    Now that we know what Forex signals are, let's comment on the importance of these alerts in relation to our operations.
    As we have already said, having a signal system for investing is quite advantageous, since, through these alerts, we obtain quality information that helps our operations end up being a true success.

    »Use of signals for beginners and experts

    In this sense, we have to say that one of the main advantages of Forex signals is that they can be used by both beginners and trading professionals.
    Both can benefit from using a trading signal system, because the more information and resources we have in our hands, the greater our probability of success. Let's see how beginners and experts can take advantage of alerts:
    • Beginners: for inexperienced traders, these alerts become even more important, since they provide an additional tool that guides them in carrying out operations in the Forex market.
    • Professionals: in the same way, professionals are also well advised to make use of these alerts, so they have adequate information to continue bringing their investments to fruition.
    Now that we know that both beginners and experts can use forex signals to invest, let's see what other advantages they have.

    »Trading automation

    When we work in the financial world, none of us can spend 24 hours in front of the computer waiting to perform the perfect operation; it is impossible.
    That is why Forex signals are important: in order to carry out our investments, all we have to do is wait for the signals to arrive, stay attentive to the alerts we receive, and operate at the right time according to the opportunities that arise.
    It is fantastic to have a tool like this that makes our work easier in this regard.

    »Carry out profitable Forex operations

    These signals are also important because the vast majority of them are usually quite profitable; for this reason, we should get an alert system that provides accurate information so that our operations can bring us great benefits.
    In addition, these Forex signals have an added value: they are very easy to understand. We will therefore have a very useful tool at hand that is not complicated and ends up being very beneficial for us.

    »Decision support analysis

    A system of currency market signals is also very important because it will help us to make our subsequent decisions.
    We cannot forget that, before carrying out any type of operation in this market, we must think it through and identify the exact moment when our investments are going to bring us profits.
    Therefore, all the information provided by these alerts will be a fantastic basis for future operations that we are going to carry out.

    »Trading Signals made by professionals

    Finally, we have to recall that these signals are made by the best professionals: financial experts who know perfectly how to analyze the movements of the market and the changes in prices.
    Hence the importance of alerts: they are very reliable and are presented as a necessary tool for operating in Forex so that our operations are as profitable as possible.

    What should a signal provider be like?

    https://preview.redd.it/j0ne51jypny51.png?width=640&format=png&auto=webp&s=5578ff4c42bd63d5b6950fc6401a5be94b97aa7f
    As you have seen, Forex signal systems are really important if our operations are to bring us many benefits. For this reason, there are at present multiple platforms offering these financial services, making investing in currencies very simple and fast.
    Before telling you about the main services currently available on the market, you should know the main characteristics a good signal provider should have, so that, at the time of your choice, you are sure you have selected one of the best systems.

    »Must send us information on the main currency pairs

    In this sense, one of the first things to note is that a good signal provider must, at a minimum, send us alerts offering information on the 6 main currencies: the euro, the dollar, the pound, the yen, the Swiss franc, and the Canadian dollar.
    Of course, the data provided will relate to the pairs made up of these currencies. We can also find systems that offer information about other, minor currencies, but as we have said, at a minimum we should get these 6.

    »Trading tools to operate better

    Likewise, signal providers must also provide a good set of tools so that we can learn more about the Forex market.
    We refer, above all, to technical analysis, which will help us develop our own strategies for operating in this market.
    These analyses are always prepared by professionals and study, mainly, the assets we have available to invest in.

    »Different Forex signals reception channels

    They must also make available different channels through which they will send us the Forex signals; the usual options are the platform's website, a text message, or our email.
    In addition, it is recommended that the signal system we choose sends a good number of alerts throughout the day, in order to have a wide range of possibilities.

    »Free account and customer service

    Another aspect to take into account when choosing a good signal provider is whether we have the option of receiving alerts for free for a limited time, as well as the profitability of the signals it sends us.
    Similarly, a final aspect to emphasize is that a good signal system must also have excellent customer service, available to us 24 hours a day, which we can contact through an email, a phone number, or a live chat, for greater immediacy.
    Having said all this, in our last section we will tell you which services are currently the best on the market, that is, the Forex signal platforms most suitable to work with and carry out good operations. In this case, we will talk about ForexPro Signals, 365 Signals and Binary Signals.

    Forex Signals Reddit: conclusion

    To invest properly in the Forex market, it is convenient to get a signal system that provides us with all the necessary information about this market. Remember that Forex is a very volatile market and, therefore, many movements tend to occur quickly.
    Asset prices can change in a matter of seconds, hence the importance of having a system that helps us analyze the market and know the right time to start operating.
    Therefore, although there are currently many signal systems that can offer good services, the three mentioned above are the best valued by users, which is why they are the best signal providers we can choose to carry out our investments.
    Most of these alerts are quite profitable and, in addition, these systems usually emit a large number of signals per day with full guarantees. For all this, SignalsForexPro, Signals365, or SignalsBinary are presented as fundamental tools for obtaining a greater number of benefits when we carry out operations in the currency market.
    submitted by kayakero to makemoneyforexreddit [link] [comments]
