
The Next Processor Change is Within ARM's Reach

As you may have seen, I sent the following Tweet: “The Apple ARM MacBook future is coming, maybe sooner than people expect”
Today, I would like to further elaborate on that.
tl;dr: Apple will be moving to ARM-based Macs in what I believe are four stages, starting around 2015 and ending around 2023-2025: the release of T1-chip MacBooks, the release of T2-chip MacBooks, the release of at least one lower-end ARM MacBook model, and the transition of the full lineup to ARM. Reasons for each are below.
Apple is very likely going to switch their CPU platform to their in-house silicon designs built on the ARM architecture. This understanding is fairly common amongst various Apple insiders. Here is my personal take on how the switch will happen and be presented to the consumer.
The first question would likely be “Why would Apple do this again?”. Throughout their history, Apple has already made two other storied CPU architecture switches - first from the Motorola 68k to PowerPC in the early 90s, then from PowerPC to Intel in the mid 2000s. Why make yet another? Here are the leading reasons:
A common refrain heard on the Internet is the suggestion that Apple should switch to CPUs made by AMD, and while this has been considered internally, it will most likely not be chosen as the path forward, even for megalithic giants like the Mac Pro. Even though AMD would mitigate Intel's current set of problems, it does nothing to help with the problems and inefficiencies of the x86_64 architecture itself, on top of being a jump to a platform that doesn't have a decade of proven support behind it. Why spend a lot of effort re-designing and re-optimizing for AMD's platform when you can put that effort into your own, and continue the vertical integration Apple is well-known for?
I believe the internal development for the ARM transition started around 2015/2016 and is planned as 4 distinct stages. Not all of this is information from Apple insiders; some of it is my own interpretation, based on information gathered from supply-chain sources, examination of MacBook schematics, and other indicators from Apple.

Stage 1 (2014/2015 to 2017):

The rollout of computers with Apple's T1 chip as a coprocessor. This chip is very similar to Apple's T8002 chip design, which was used for the Apple Watch Series 1 and Series 2. The T1 is primarily present on the first Touch ID-enabled Macs, the 2016 and 2017 model-year MacBook Pros.
Considering the amount of time required to design and validate a processor, this stage most likely started around 2014 or 2015, with early experimentation to see whether an entirely new chip design would be required, or if it would be sufficient to repurpose something in the existing lineup. As we can see, the general-purpose ARM processors aren't a one-trick pony.
To get a sense of the decision-making at the time, let's look back a bit. The year is 2016, and we're witnessing the beginning of the stagnation of Intel's processor lineup. There is not a lot to look forward to other than another "+" being added to the 14nm fabrication process. The MacBook Pro has used the same design for many years now, and its age is starting to show. Moving to AMD is still very questionable, as they've historically not been able to match Intel's performance or functionality, especially at the high end; the "Ryzen" lineup is still unreleased, so there are absolutely no benchmarks or other data to show it is worth consideration, and AMD's most recent line of "Bulldozer" processors was very poorly received.

Now is probably as good a time as any to begin experimenting with the in-house ARM designs, but it's not time to dive into the deep end yet: our chips are not nearly mature enough to compete, and it's not yet certain how long Intel will be stuck in the mud. As well, it is widely understood that Apple and Intel have an exclusivity contract in exchange for advantageous pricing. Any transition would take considerable time and effort, and since there is no current viable alternative to Intel, the in-house chips will need to advance further, and breaching the contract with Intel is too great a risk. So it makes sense to start with small deployments, extend the timeline, stretch out to the end of the contract, and eventually release a real banger of a Mac.
Thus, the 2016 Touch Bar MacBooks were born, alongside the T1 chip mentioned earlier. There were good reasons for abandoning the piece of hardware previously used for a similar purpose, the SMC, or System Management Controller. I suspect the biggest was to allow early analysis of the challenges that would be faced migrating the Mac's built-in peripherals and IO to an ARM-based controller, as well as exploring the manufacturing, power, and performance results of using these chips across a broad deployment, and analyzing any early failure data, then using all of this to patch issues, enhance processes, and inform future designs looking towards the second stage.
The former SMC duties now moved to the T1 include things like thermal and fan control, power and battery management, and sleep/wake handling.
The T1 chip also communicates with a number of other controllers to manage a MacBook’s behavior. Even though it’s not a very powerful CPU by modern standards, it’s already responsible for a large chunk of the machine’s operation. Moving control of these peripherals to the T1 chip also brought about the creation of the fabled BridgeOS software, a shrunken-down watchOS-based system that operates fully independently of macOS and the primary Intel processor.
BridgeOS is the first step for Apple's engineering teams in migrating underlying systems and services to integrate with the ARM processor, and it allowed internal teams to more easily and safely develop and issue firmware updates. Since BridgeOS is based on a standard and now well-known system, they can leverage existing engineering expertise to flesh out the T1's development, rather than relying on the more arcane and specialized SMC system, which operates completely differently and requires highly specific knowledge to work with. It also allows reuse of the same fabrication pipeline used for Apple Watch processors, and eliminates the need for yet another IC design for the SMC, sourced separately, saving a bit on cost.
Also during this time, on the software side, "Project Marzipan", known today as Catalyst, came into existence. We'll get to this shortly.
For the most part, Stage 1 went by without any major issues. There were a few firmware problems during the product launch, but they were quickly solved with software updates. With the engineering teams now experienced in building, manufacturing, and shipping the T1 systems, Stage 2 could begin.

Stage 2 (2018 to present):

Stage 2 encompasses the rollout of Macs with the T2 coprocessor, replacing the T1. This covers a much wider lineup: the MacBook Pro with Touch Bar starting with the 2018 models, the MacBook Air starting with the 2018 models, the iMac Pro, the 2019 Mac Pro, and the Mac Mini starting in 2018.
With this iteration, the more powerful T8012 processor design was used, a further revision of the T8010 design that powers the A10 series processors in the iPhone 7. This change provided a significant increase in computational ability and brought about the integration of even more devices into the T2. In addition to the T1's existing responsibilities, the T2 now controls the system's audio, the camera's image signal processor, the SSD controller, and the Secure Enclave that underpins encrypted storage and secure boot.
Those last two points are crucial for Stage 2. Under this new paradigm, the vast majority of the Mac is now under the control of an in-house ARM processor. Stage 2 also brings iPhone-grade hardware security to the Mac. These T2 models also gained a supported DFU mode (Device Firmware Update, more commonly "recovery mode"), which acts similarly to iPhone DFU mode and allows restoration of the BridgeOS firmware in the event of corruption (most commonly due to user-triggered power interruption during flashing).
Putting more responsibility onto the T2 again allows for Apple’s engineering teams to do more early failure analysis on hardware and software, monitor stability of these machines, experiment further with large-scale production and deployment of this ARM platform, as well as continue to enhance the silicon for Stage 3.
A few new user-visible features were added as well in this stage, such as support for the passive “Hey Siri” trigger, and offloading image and video transcoding to the T2 chip, which frees up the main Intel processor for other applications. BridgeOS was bumped to 2.0 to support all of these changes and the new chip.
On the macOS software side, what was internally known as Project Marzipan was first demonstrated to the public. Though it was originally discovered around 2017, and most likely began development and testing in the later parts of Stage 1, its effects could be seen in 2018 with the release of iPhone apps now running on the Mac using the iOS SDKs: Voice Memos, Apple News, Home, Stocks, and more, with an official announcement and public release at WWDC in 2019. Catalyst became the public name for Marzipan. This SDK allows app developers to easily port iOS apps to run on macOS, with minimal or no code changes, and without needing to develop separate versions of each. The end goal is to allow developers to submit a single version of an app and have it work seamlessly on all Apple platforms, from Watch to Mac. At present, iOS and iPadOS apps are compiled for the full gamut of ARM instruction sets used on those devices, while macOS apps are compiled for x86_64. The logical next step is to cross this bridge and unify the instruction sets.
The products using the T2 have not been quite as well received as those with the T1. Many users have noticed how this change contributes further towards machines with limited to no repair options outside of Apple's repair organization, on top of some general bugs in the T2 itself.
Products with the T2 also no longer have the "Lifeboat" connector, which was previously present on the 2016 and 2017 Touch Bar MacBook Pro models. This connector allowed a certified technician to plug in a device called a CDM Tool (Customer Data Migration Tool) to recover data from a machine that was not functional. Its removal limits the options for data recovery in the event of a problem, and since Apple has never offered a data recovery service, an irreparable failure of the T2 chip or the primary board results in complete data loss, in part due to the strong encryption provided by the T2 chip (even if the data could be read off, the encryption keys are lost with the T2). The T2 also brought about serial-number pairing of certain internal components, such as the solid-state storage, display, and trackpad. In fact, many other controllers on the logic board are now also paired to the T2, such as the WiFi and Bluetooth controller, the PMIC (Power Management IC), and several others. This is the exact same system used on newer iPhone models and is quite familiar to technicians who repair iPhone logic boards. While these changes are fantastic for device security and for corporate and enterprise users, allowing a very high degree of assurance that devices will refuse to boot if tampered with in any way, even via storied supply-chain attacks or other malfeasance possible with physical access to a machine, they have created difficulty for consumers, who more often lack the expertise or awareness to keep critical data backed up, as well as the funds for the necessary repairs from authorized repair providers. Other reported issues suspected to be related to the T2 are audio "cracking" or distortion on the internal speakers, and BridgeOS becoming corrupt following a firmware update, resulting in a machine that can't boot.
I believe these hiccups will be properly addressed once macOS is fully integrated with the ARM platform. This stage of the Mac is more like a chimera of an iPhone and an Intel based computer. Technically, it does have all of the parts of an iPhone present within it, cellular radio aside, and I suspect this fusion is why these issues exist.
Recently, security researchers discovered an underlying security problem present within the Boot ROM code of the T1 and T2 chip. Due to being the same fundamental platform as earlier Apple Watch and iPhone processors, they are vulnerable to the “checkm8” exploit (CVE-2019-8900). Because of how these chips operate in a Mac, firmware modifications caused by use of the exploit will persist through OS reinstallation and machine restarts. Both the T1 and T2 chips are always on and running, though potentially in a heavily reduced power usage state, meaning the only way to clean an exploited machine is to reflash the chip, triggering a restart, or to fully exhaust or physically disconnect the battery to flush its memory. Fortunately, this exploit cannot be done remotely and requires physical access to the Mac for an extended duration, as well as a second Mac to perform the change, so the majority of users are relatively safe. As well, with a very limited execution environment and access to the primary system only through a “mailbox” protocol, the utility of exploiting these chips is extremely limited. At present, there is no known malware that has used this exploit. The proper fix will come with the next hardware revision, and is considered a low priority due to the lack of practical usage of running malicious code on the coprocessor.
At the time of writing, all current Apple computers have a T2 chip present, with the exception of the 2019 iMac lineup. This will change very soon with the expected release of the 2020 iMac lineup at WWDC, which will incorporate a T2 coprocessor as well.
Note: from here on, this turns entirely into speculation based on info gathered from a variety of disparate sources.
Right now, we are in the final steps of Stage 2. There are strong signs that a MacBook (12") with an ARM main processor will be announced this year at WWDC ("One more thing..."), at a fall 2020 event, at a Q1 2021 event, or at WWDC 2021. Based on the lack of a more concrete answer, WWDC 2020 will likely not see it, but I am open to being wrong here.

Stage 3 (present/2021 to 2022/2023):

Stage 3 involves the introduction of at least one fully ARM-powered Mac into Apple's computer lineup.
I expect this will come in the form of the previously retired 12" MacBook. There are rumors that Apple is still working internally to perfect the infamous Butterfly keyboard, and there are also signs that Apple is developing an A14X-based processor with 8-12 cores designed specifically for use as the primary processor in a Mac. It makes sense that this model could see the return of the Butterfly keyboard, considering how thin and light it is intended to be, and using an A14X processor would make it a very capable, very portable machine that should give customers a good taste of what is to come.
Personally, I am excited to test the new 12" “ARMbook”. I do miss my own original 12", even with all the CPU failure issues those older models had. It was a lovely form factor for me.
It's still not entirely known whether the physical design of these will change from the retired version, exactly how many cores it will have, the port configuration, etc. I have also heard rumors about the 12” model possibly supporting 5G cellular connectivity natively thanks to the A14 series processor. All of this will most likely be confirmed soon enough.
This 12” model will be the perfect stepping stone for stage 3, since Apple’s ARM processors are not yet a full-on replacement for Intel’s full processor lineup, especially at the high end, in products such as the upcoming 2020 iMac, iMac Pro, 16” MacBook Pro, and the 2019 Mac Pro.
Performance of Apple's ARM platform compared to Intel's has been a big point of contention over the last couple of years, primarily due to the lack of data representative of real-world desktop usage. The iPad Pro and other models with Apple's highest-end silicon still can't run a lot of high-end professional applications, so beyond video-editing and photo-editing benchmarks, the data quickly becomes meaningless. There are synthetic benchmarks like Geekbench, AnTuTu, and others that try to bridge the gap, but they are very far from accurate or representative of real-world performance in many instances. Even though Apple's ARM processors are incredibly powerful, and I give constant praise to their silicon design teams, there still just isn't enough data to show how they will perform in real-world desktop usage, and synthetic benchmarks are like standardized testing: they only show how good a platform is at running the synthetic benchmark. This type of benchmark stresses only very specific parts of each chip at a time, rather than measuring how well the chip does a general task, and then boils the complexity and nuances of each chip down into a single numeric score, which is not a remotely accurate way of representing processors with vastly different capabilities and designs. It would be like gauging how well a person performs a manual labor task by averaging only the speed of every individual muscle in the body, regardless of whether, or how much, each is used. A specific group of muscles being stronger or weaker than the others could wildly skew the final result and grossly misrepresent the performance of the person as a whole. Real-world program performance will be the key in determining the success and future of this transition, and it will have to be great on this 12" model, and not just in a limited set of tasks: it will have to be great at *everything*.
It is intended to be the first Horseman of the Apocalypse for the Intel Mac, and it better behave like one. Consumers have been expecting this, especially after 15 years of Intel processors, the continued advancement of Apple’s processors, and the decline of Intel’s market lead.
The point of this "demonstration" model is to ease both users and developers into the desktop ARM ecosystem slowly. Much like how the iPhone X paved the way for Face ID-enabled iPhones, this 12" model will pave the way towards ARM Mac systems. Some power-user type consumers may complain at first, depending on the software compatibility story, then realize it works just fine, since the majority of computer users today do not do many tasks that can't be accomplished on an iPad or a lower-end computer. Apple needs to gain the public's trust for basic tasks first, before they will be able to break into the market of users performing more hardcore or "Pro" tasks. This early model will probably not be targeted at these high-end professionals, which will allow Apple to begin gathering early information about the stability and performance of this model, day-to-day usability, developmental issues that need to be addressed, hardware failure analysis, etc. All of this information is crucial to Stage 4, or possibly later parts of Stage 3.
The two biggest concerns most people have with the architecture change are app support and Boot Camp.
Any apps released through the Mac App Store will not be a problem. Because App Store apps are submitted as LLVM IR ("Bitcode"), the system can automatically download versions compiled and optimized for ARM platforms, similar to how App Thinning works on iOS. For apps distributed outside the App Store, things might be trickier, and there are a few ways that could go.
As for Boot Camp, while ARM-compatible versions of Windows do exist and are in development, they come with their own similar set of app-support problems. Microsoft has experimented with emulating x86_64 on its ARM-based Surface products, and some other OEMs have created their own Windows-powered ARM laptops, but with very little success. Performance is a problem across the board, with other ARM silicon not being anywhere near as advanced, and the majority of Windows apps not developed in-house at Microsoft running terribly under the x86_64 emulation software. If Boot Camp does come to an early ARM MacBook, it will more than likely run very poorly for anything other than Windows UWP apps. There is a high chance it will be abandoned entirely until Windows becomes much friendlier to the architecture.
I believe this will also be a very crucial turning point for the MacBook lineup as a whole. At present, the iPad Pro paired with the Magic Keyboard is, in many ways, nearly identical to a laptop, with the biggest difference being the system software itself. While Apple executives have outright denied plans of merging the iPad and MacBook lines, that could very well just be a marketing stance, shutting down the rumors in anticipation of a well-executed surprise. I think that Apple might at least re-examine the possibility of merging Macs and iPads in some capacity, but whether they proceed or not could be driven by consumer reaction to both products. Do they prefer the feel and usability of macOS on ARM, and like the separation of the two products? Is there success across the industry for the ARM platform, both at the lower and higher ends of the market? Do users see that iPadOS and macOS are just two halves of the same coin? Should there be a middle ground, a new type of product similar to the Surface Book but running macOS? Should Macs and iPads run a completely uniform OS? Will iPadOS ever expose the same sort of UNIX-based tools for IT administrators and software developers that macOS has? These are all very real questions that will pop up in the near future.
The line between Stage 3 and Stage 4 will be blurry, and will depend on how Apple wishes to address different problems going forward, and what the reactions look like. It is very possible that only the 12" model will be released at first; or a handful of lower-end laptop and desktop models could follow, with high-performance Macs arriving in Stage 4; or perhaps everything but enterprise products like the Mac Pro will be switched fully. Only time will tell.

Stage 4 (the end goal):

Congratulations, you've made it to the end of my TED talk. We are now well into the 2020s, and COVID-19 Part 4 is casually catching up to the 5G = Virus crowd. All Macs have fully transitioned to ARM: iMac, MacBooks Pro and otherwise, Mac Pro, Mac Mini, everything. The future is fully Apple from top to bottom, and the vertical integration leading to market dominance continues. Many other OEMs have begun to follow this path to some extent, creating more demand for a similar class of silicon from other firms.
The remainder here is pure speculation with a dash of wishful thinking. A lot of things are still entirely unclear. The only concrete thing is that Stage 4 will have happened once everything is running Apple's in-house processors.
By this point, consumers will be quite familiar with ARM Macs existing, and developers will have had enough time to transition their apps fully over to the newly unified system. Any performance, battery life, or app support concerns will not be an issue at this point.
There are no more details here, it’s the end of the road, but we are left with a number of questions.
It is unclear if Apple will stick to AMD's GPUs or whether they will instead opt to use their in-house graphics solutions that have been used since the A11 series of processors.
How Thunderbolt support on these models of Mac will be achieved is unknown. While Intel has made it openly available for use, and there are plans to have USB and Thunderbolt combined in a single standard, it’s still unclear how it will play along with Apple processors. Presently, iPhones do support connecting devices via PCI Express to the processor, but it has only been used for iPhone and iPad storage. The current Apple processors simply lack the number of lanes required for even the lowest end MacBook Pro. This is an issue that would need to be addressed in order to ship a full desktop-grade platform.
There is also the question of upgradability for desktop models, and if and how there will be a replaceable, socketed version of these processors. Will standard desktop and laptop memory modules play nicely with these ARM processors? Will they drop standard memory across the board, in favor of soldered options, or continue to support user-configurable memory on some models? Will my 2023 Mac Pro play nicely with a standard PCI Express device that I buy off the shelf? Will we see a return of “Mac Edition” PCI devices?
There are still a lot of unknowns, and guessing any further in advance is too difficult. The only thing that is certain, however, is that Apple processors coming to Mac is very much within arm’s reach.
submitted by Fudge_0001 to apple [link] [comments]

System Programming Language Ideas

I am an embedded electronics guy with several years of experience in the industry, mainly writing embedded software in C at both the high level and the low level. My goal is to start fresh with some projects in terms of software platforms, so I have been looking at whether to use existing programming languages. I want my electronics and software to be open, but therein lies part of the problem. I have used and evaluated many compilers over the years, both proprietary (IAR) and open source (Clang, GCC, etc.). I have nothing against the open source stuff; however, the companies I have worked for (and I) always come crawling back to IAR. Why? Believe it or not, it's not a matter of the compiler! It's a matter of the linker.
I took a cursory look at the latest GNU / Clang linkers, and I do not think they have fixed the major issue we always had with them: memory flood fill. Specifying where each object or section goes in memory is fine for small projects or very small teams (1 to 2 people). However, when you have a bigger team (> 2) and you are using microcontrollers with segmented memory (memory blocks that are not contiguous), memory flood fill becomes a requirement of the linker. Often the MCUs I and others work on do not have megabytes of memory, but kilobytes. The MCU is chosen for the project, and if we are lucky enough to get one with lots of memory, you know why such a chip was chosen: there is a large memory requirement in the software. We would not choose a large-memory part if we did not need it, due to cost. Imagine a developer writing a library or piece of code whose memory requirement changes by single-digit or tens of kilobytes (added or subtracted) with each commit. Now imagine that developer having to manually manage the linker script on their particular dev station each time, to make sure the linker doesn't cough based on what everybody else has put in there. On top of that, they need to manually update the script when they commit, and hope that nobody else needed to change it as well for whatever they were developing. For even a small number of developers, manually managing the script has way too many moving parts to be efficient. Memory flood fill solves this problem. IAR (along with a few other linkers, like Segger's) allows me to just say: "Here are the ten memory blocks on the device. I have a .text section. You figure out how to spread all the data across those blocks." No manual script modifications by each developer, and no requirement to sync when committing. It just works.
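For context, here is roughly what the manual approach looks like in a GNU ld script. This is a simplified, hypothetical script for an imaginary part with three disjoint RAM blocks; the addresses and section names are made up for illustration. Every assignment after the `>` has to be kept balanced by hand across the team, which is exactly the bookkeeping flood fill automates:

```ld
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
  RAM0  (rwx) : ORIGIN = 0x20000000, LENGTH = 16K
  RAM1  (rwx) : ORIGIN = 0x20010000, LENGTH = 8K  /* not contiguous with RAM0 */
  RAM2  (rwx) : ORIGIN = 0x20020000, LENGTH = 4K
}

SECTIONS
{
  .text : { *(.text*) } > FLASH

  /* Manual placement: each developer must know which block still has room. */
  .bss.net   : { *(.bss.net*) }   > RAM0
  .bss.audio : { *(.bss.audio*) } > RAM1
  .data      : { *(.data*) }      > RAM2 AT > FLASH
}
```

If `.bss.net` grows past 16K, someone has to notice and re-shuffle these lines by hand; with flood fill, the linker would simply spill the data into whichever block still fits it.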
Now.. what's the next problem? I don't want to use IAR (or Segger)! Why? If my stuff is going to be open to the public on my repositories.. don't you think it sends the wrong message if I say: "Well, here is the source code everybody! But Oh sorry, you need to get a seat of IAR if you want to build it the way I am or figure out how to build it yourself with your own tool chain". In addition, let's say that we go with Segger's free stuff to get by the linker problem. Well, what if I want to make a sellable product based on the open software? Still need to buy a seat, because Segger only allows non commercial usage of their free stuff. This leaves me with using an open compiler.
To me, memory flood fill in the linker is a requirement. I will not use a C toolchain that does not have this feature. My compiler options are Clang, GCC, etc. I can either implement a linker-script generator or a linker itself. Since I do not need to support dynamic-link libraries or any complicated virtual memory handling, I think implementing a linker is easily doable. The linker-script generator is the simple option, but it's a hack, and therefore I would rather not partake in it. Basically, before the linker (LD / LLD) is invoked, I would go into all the object files, analyze their memory requirements, and generate a linker script that implements the flood fill as a pre-step. Breaking open ELF files and analyzing them is pretty easy; I have done it in the past. The pre-step would use my own linker-script format that includes provisions for memory flood fill. Since this is like invoking the linker twice, it's a hack and a speed detriment for something that I think should have been a feature of LD / LLD decades ago.

"Everybody is using GNU / Clang with LD / LLD! Why do you think you need flood fill?" To that I respond: people who are using GNU / Clang with LD / LLD are either on small teams (embedded), or they are working with systems that have contiguous memory and don't have to worry about segmentation. Case in point: phones, laptops, desktops, anything with external RAM. I am sure there are other situations beyond those two in which segmented memory is not an issue. Maybe the segmented memory blocks are so large that you can ignore most of them for one program; early Visual GDB had this issue. You would go into its linker scripts and find that, for chips like the old NXP 4000 series, it was choosing a single RAM block for data memory because of the linker limitation. This actually horrendously turned my company off from using GNU / Clang at the time.
In embedded systems where MCUs are chosen based on cost, the amount of memory is specifically chosen to meet that cost. You can't just "ignore" a memory block due to linker limitations. This would require either to buy a different chip or more expensive chip that meets the memory requirements.
ANYWAYS, that was a long-winded prelude to what has led me to looking at making my own programming language. TLDR: I want my software to be open, I want people to be able to build it without shelling out an arm and a leg, and I am not fond of hacks that exist because of what I believe are oversights in the design of existing software.
Why not use Rust, Nim, Go, Zig, any of those languages? No. Period. No. I work with small embedded systems running on small-memory microcontrollers, as do a massive number of other companies and developers. Small embedded systems are what make most of the world turn. I want a systems programming language that is as simple as C with certain modern developer "niceties". This does not mean adding the kitchen sink: generics, closures, classes, and 50 other things, just because the rest of the software industry has been using them for years in higher-level languages. It is my opinion that the reason nothing has displaced (or will displace) C in the past, present, or near future is that C is stupid simple. It's basically structures, functions, and pointers, and that's it! Does it have its problems? Sure! However, at the end of the day, developers can pick up a C program and go without a huge hassle. Why can't we have a language that sticks to this small subset, or "core" functionality, and iterates on that, instead of trying to add every feature of other languages? Just give me my functions and structures. Let's fix some of the developer-productivity issues while we are at it, and no, I don't mean by adding generics and classes; I mean more along the lines of getting rid of header files and allowing CTFE. "D is what you want." No, no it's not. D is a prime example of the kitchen sink, and the kitchen sink of 50 large corporations at that.
What are the problems I think need to be solved in a C replacement?
  1. Header files.
  2. Implementation hiding. You can't know the size of a structure without either manually managing that size in a header or exposing all of the structure's fields in a header. Every change to the library containing that structure causes a recompile all the way up the chain of dependencies.
  3. CTFE (compile time function execution). I want to be able to assign type safe constants to things on initialization.
  4. Pointers replaced with references? I am on the fence with this one. I love the power of pointers, but after some research I can see where the industry is trying to go.
These are the things I think that need to be solved. Make my life easier as a developer, but also give me something as stupid simple as C.
I have some ideas of how to solve some of these problems. Disclaimer: some things may be hypocritical based on the prelude discussion; however, as is often the case, not 'every' discussion point is black and white.

  1. Header Files
Replace them with a module / package system. There exists a project folder wherein there lies a .build script. The compiler runs the build script and builds the project. Building is part of the language / compiler, but dependency management and versioning are not. People will fall on both sides of this camp, for or against. However, it appears that most module-based languages require specifying all of the input files up front instead of being able to "dumb compile" like C / C++, where all source files are "truly" dumbly independent. Such a module build system would be harder to parallelize due to module dependencies; however, in total, the required build "computation" (not necessarily time) is less, because the compiler knows up front everything that makes up a library and doesn't have to spawn a million processes (each taking its own time) for each source file.
  2. Implementation hiding
What if it were possible to make a custom library format for the language? Libraries use this custom format and contain "deferrals" for a lot of things that need to be resolved. At packaging time, the final output stage, link time, whatever you want to call it (the executable output), the build tool resolves all of the deferrals because it now knows all parts of the input "source" objects. What this means is that the last stage of the build process will most likely take the longest, because it is also the stage that generates the code.
What is a deferral? Libraries are built with type information and IR-like code for each of the functions. The IR code is a representation that can be either executed by an interpreter (for CTFE) or converted to binary instructions at the last output stage. A deferral is a node within the library that needs to be resolved at the last stage. Think of it like an unresolved symbol, but mostly for constants and structures.
Inside my library A I have a structure that has a bunch of fields. Those fields may be public or private. Another library B wants to derive from that structure. It knows the structure type exists and that it has these public fields, and it can make use of them. At the link stage, the size of the structure and all derivative structures and fields are resolved. A year down the road, library A adds a private field to the structure. Library B doesn't care, as long as the type name of the structure or the public members it is using are not changed. Pull the new library into the link stage and everything is resolved at that time.
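For contrast, the closest C has today is the opaque-pointer idiom, which hides the fields but forces every access through a function call and a heap allocation. A minimal sketch, with invented names, collapsing the header and source file into one listing:

```c
#include <stdlib.h>
#include <assert.h>

/* What would live in widget.h: clients see only a forward declaration,
   so the struct can grow private fields without recompiling them. */
typedef struct Widget Widget;
Widget *widget_create(void);
int     widget_get_id(const Widget *w);
void    widget_destroy(Widget *w);

/* What would live in widget.c: the full definition stays hidden. */
struct Widget {
    int id;
    int private_counter;  /* adding fields here doesn't break clients */
};

Widget *widget_create(void) {
    Widget *w = malloc(sizeof *w);
    if (w) { w->id = 42; w->private_counter = 0; }
    return w;
}

int widget_get_id(const Widget *w) { return w->id; }

void widget_destroy(Widget *w) { free(w); }
```

The cost of this idiom is that clients can never place a Widget on the stack or embed it in another struct, because its size is unknown to them; the deferral scheme described above aims to keep value semantics while still hiding fields.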
I am an advocate for just having plain old C structures but having the ability to "derive" sub structures. Structures would act the same exact way as in C. Let's say you have one structure and then in a second structure you put the first field as the "base" field. This is what I want to have the ability to do in a language.. but built in support for it through derivation and implementation hiding. Memory layout would be exactly like in C. The structures are not classes or anything else.
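The C idiom being described can be sketched directly: embed the "base" struct as the first field. Because C guarantees no padding before the first member, a pointer to the derived struct is also a valid pointer to its base:

```c
#include <stddef.h>
#include <assert.h>

/* Plain C structures; no classes, no vtables. */
typedef struct { int x, y; } Point;

typedef struct {
    Point base;   /* "derives" Point; memory layout is exactly as in C */
    int   z;
} Point3D;
```

The proposal is just built-in language support for this pattern (plus implementation hiding), so the cast and the `base` field bookkeeping are handled by the compiler instead of by convention.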
I have an array of I2C ports in a library; however, I have no idea how many I2C ports there should be until link time. What to do!? I define a deferred constant for the size of the array that needs to be resolved at link time. At link time the build file passes the constant into the library. Or it gets passed as a command line argument.
What this also allows me to do is to provide a single library that can be built using any architecture at link time.
  3. CTFE
Having safe, type-checked ways to define constants that are filled in by the compiler is, I think, a very good mechanism. Since all of the code in libraries is some sort of IR, it can be interpreted at link time to fill in all the blanks. The compiler would place a massive emphasis on analyzing which things in the source code are constants that can be filled in at link time.
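C's nearest approximation is constant folding plus C11 `_Static_assert`, which verifies a computed constant at compile time; real CTFE would extend this to arbitrary functions interpreted from the library IR. A hedged sketch with invented clock and baud values:

```c
#include <assert.h>

/* Baud-rate divisor computed entirely at compile time (values invented). */
#define CLOCK_HZ   16000000u
#define BAUD       115200u
#define UBRR_VALUE (CLOCK_HZ / (16u * BAUD) - 1u)

/* C11: the compiler checks the constant; nothing happens at runtime. */
_Static_assert(UBRR_VALUE > 0, "baud divisor underflowed");
```

With CTFE the divisor could instead come from an ordinary function call evaluated by the interpreter at link time, with full type checking rather than textual macro substitution.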
There would exist "conditional compilation" in that all of the code exists in the library; however, at link time the conditional compilation is evaluated and only the areas that are "true" are included in the final output.
  4. Pointers & References & Type safety
I like pointers, but I can see the industry trend to move away from them in newer languages. Newer languages seem to kneecap them compared to what you can do in C. I have an idea of a potential fix.
Pointers or some way is needed to be able to access hardware registers. What if the language had support for references and pointers, but pointers are limited to constants that are filled in by the build system? For example, I know hardware registers A, B, and C are at these locations (maybe filled in by CTFE) so I can declare them as constants. Their values can never be changed at runtime; however, what a pointer does is indicate to the compiler to access a piece of memory using indirection.
There would be no way to convert a pointer to a reference or vice versa. There is no way to assign a pointer a different value or have it point at anything that exists (variables, byte arrays, etc..). Then how do we perform a UART write with a block of data? I said there would be no way to convert a reference (a byte array, for example) to a pointer, but I did not say you could not take the address of a reference! I can take the address of a reference (which points to a block of variable memory) and convert it to an integer. You can perform any math you want with that integer, but you can't actually convert that integer back into a reference! As far as the compiler is concerned, the address of a reference is just integer data. Now I can pass that integer into a module that contains a pointer and write data to memory using indirection.
As far as the compiler is concerned, pointers are just a way to tell the compiler to indirectly read and write memory. It would treat pointers as a way to read and write integer data to memory by using indirection. There exists no mechanism to convert a pointer to a reference. Since pointers are essentially constants, and we have deferrals and CTFE, the compiler knows what all those pointers are and where they point to. Therefore it can assure that no variables are ever in a "pointed to range". Additionally, for functions that use pointers - let's say I have a block of memory where you write to each 1K boundary and it acts as a FIFO - the compiler could check to make sure you are not performing any funny business by trying to write outside a range of memory.
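Today's C equivalent of a constant hardware pointer is a macro that casts a fixed address. To keep this sketch runnable off-hardware, a static variable stands in for the register; on a real part the address would be a literal like `0x4000C000` (the names and address here are invented):

```c
#include <stdint.h>
#include <assert.h>

/* Stand-in for a memory-mapped register so the example runs hosted. */
static volatile uint32_t fake_uart_data;

/* On hardware this would be e.g. ((volatile uint32_t *)0x4000C000). */
#define UART0_DATA ((volatile uint32_t *)&fake_uart_data)

/* The pointer is a constant; all access goes through indirection. */
static void uart_write_byte(uint8_t b) {
    *UART0_DATA = b;
}

static uint32_t uart_last_written(void) {
    return fake_uart_data;
}
```

In the proposed language, because such pointers are link-time constants, the compiler could verify that no variable ever lands inside a pointed-to range, which C cannot check at all.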
What are references? References are variables that consist of, say, 8 bytes of data. The first 4 bytes are an address and the next 4 bytes are type information. There exists a reference type (any) that can be used for assigning any type to it (think void*). The compiler will determine whether casts are safe via the type information, and for casts it can't determine at build time, it will insert code to check the cast using the type information.
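A hedged sketch of that layout in C, with invented tag values; in the proposed language the compiler would emit the tag check only for casts it cannot prove safe at build time:

```c
#include <stdint.h>
#include <assert.h>

/* Invented type tags for illustration. */
typedef enum { TYPE_U8 = 1, TYPE_U32 = 2 } TypeTag;

/* A "reference": an address plus type information. */
typedef struct {
    void    *addr;
    uint32_t tag;
} Ref;

/* The runtime check the compiler would insert for an unprovable cast.
   Returns 1 and fills *out on success, 0 if the cast is rejected. */
static int checked_cast_u32(Ref r, uint32_t **out) {
    if (r.tag != TYPE_U32) return 0;
    *out = (uint32_t *)r.addr;
    return 1;
}
```

An `any` reference would simply carry whatever tag its source had, so every cast out of it goes through a check like this one.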
Functions would take parameters as ByVal or ByRef. For example DoSomething(ByRef ref uint8 val, uint8 val2, uint8[] arr). The first parameter is passing by reference a reference to a uint8 (think double pointer). Assigning to val assigns to the reference. The second parameter is passed by value. The third parameter (array type) is passed by reference implicitly.
  5. Other Notes
This is not an exhaustive list of all features I am thinking of. For example visibility modifiers - public, private, module for variables, constants, and functions. Additionally, things could have attributes like in C# to tell the compiler what to do with a function or structure. For example, a structure or field could have a volatile attribute.
I want inline assembly for the target architecture integrated into the language. So you could place a function attribute like [Assembly(armv7)]. This would tell the compiler that the function is all armv7 assembly, and the compiler will verify it. Having assembly integrated also makes all the language features, like constants, available to the assembly. Does this go against having an IR representation of the library? No; functions have weak or strong linkage. Additionally, there could be a function attribute telling the compiler: "Hey, when the link stage is using an armv7 target, build this function in". There could also be a mechanism for inline assembly and intrinsics.
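For comparison, here is how C toolchains expose this today (GCC/Clang extended asm); the proposal would have the compiler verify the assembly against the declared target instead of passing it through opaquely:

```c
#include <assert.h>

/* Adds two ints via inline assembly on x86-64; other targets fall back
   to plain C so the sketch stays portable. */
static int add_asm(int a, int b) {
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    int r;
    /* "0"(a) ties a to the same register as the output r, then b is
       added into it: r = a + b. */
    __asm__ ("addl %2, %0" : "=r"(r) : "0"(a), "r"(b));
    return r;
#else
    return a + b;
#endif
}
```

Note the constraint strings are untyped and unverified by the C compiler, which is precisely the gap that a language-integrated `[Assembly(...)]` attribute would close.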
Please keep in mind that my hope is not to see another C systems language for larger systems (desktop, phones, laptops, etc.). It's solely to see one for small embedded systems and microcontrollers. I think this is why many of the newer languages (Go, Nim, Zig, etc..) have not been adopted in embedded - they started large, and certain things were tacked on to "maybe" support smaller devices. I also don't want a runtime on my embedded microcontroller; however, I am not averse to the compiler putting bounds checks and casting checks into the assembly when it needs to. For example, if a cast fails, the compiler could just trap in a "hook" defined by the user that includes the module and line number of where the cast failed. It doesn't even matter that the system hangs or locks up, as long as I know where to look to fix the bug. I can't tell you how many times something like this would have been invaluable for debugging. In embedded, many of us say that it's better for the system to crash hard than limp along because of an array out of bounds or whatever. Maybe it would be possible to restart the system in the event of such a crash, or do "something" (like for a cruise missile :)).
This is intended to be a discussion and not so much a religious war or to state I am doing this or that. I just wanted to "blurt out" some stuff I have had on my mind for awhile.
submitted by LostTime77 to ProgrammingLanguages

Subreddit Demographic Survey 2019 : The Results

Subreddit Demographic Survey 2019

1. Introduction

Once a year, this subreddit hosts a survey in order to get to know the community a little bit and in order to answer questions that are frequently asked here. Earlier this summer, a few thousand of you participated in a massive Subreddit Demographic Survey.
Unfortunately during the process of collating results we lost contact with SailorMercure, who in previous years has completed all of the data analysis from the Google form responses. We were therefore required to collate and analyse the responses from the raw data via Excel. I attach the raw data below for those who would like to review it. For 2020 we will be rebuilding the survey from scratch.
Raw Data
Multiple areas of your life were probed: general a/s/l, education, finances, religious beliefs, marital status, etc. They are separated into 10 sections:
  1. General Demographics
  2. Education Level
  3. Career and Finances
  4. Child Status
  5. Current Location
  6. Religion and Spirituality
  7. Sexual and Romantic Life
  8. Childhood and Family Life
  9. Sterilization
  10. Childfreedom

2. Methodology

Our sample is people from this subreddit who saw that we had a survey going on and were willing to complete the survey. A weekly stickied announcement was used to alert members of the community that a survey was being run.

3. Results

5,976 participants over the course of two months at a subscriber count of 588,488 (total participant ratio of slightly >1%)

3.1 General Demographics

5,976 participants in total

Age group

Age group Participants # Percentage
18 or younger 491 8.22%
19 to 24 1820 30.46%
25 to 29 1660 27.78%
30 to 34 1107 18.52%
35 to 39 509 8.52%
40 to 44 191 3.20%
45 to 49 91 1.52%
50 to 54 54 0.90%
55 to 59 29 0.49%
60 to 64 15 0.25%
65 to 69 4 0.07%
70 to 74 2 0.03%
75 or older 3 0.05%
84.97% of the sub is under the age of 35.

Gender and Gender Identity

4,583 participants out of 5,976 (71.54%) were assigned the gender of female at birth, 1,393 (23.31%) were assigned the gender of male at birth. Today, 4,275 (70.4%) participants identify themselves as female, 1,420 (23.76%) as male, 239 (4.00%) as non binary and 42 (0.7%) as other (from lack of other options).

Sexual Orientation

Sexual Orientation Participants # Percentage
Asexual 373 6.24%
Bisexual 1,421 23.78%
Heterosexual 3,280 54.89%
Homosexual 271 4.53%
It's fluid 196 3.28%
Other 95 1.59%
Pansexual 340 5.69%

Birth Location

Because the list contains over 120 countries, we'll show the top 20 countries:
Country of birth Participants # Percentage
United States 3,547 59.35%
Canada 439 7.35%
United Kingdom 414 6.93%
Australia 198 3.31%
Germany 119 1.99%
Netherlands 72 1.20%
France 68 1.14%
Poland 66 1.10%
India 59 0.99%
Mexico 49 0.82%
New Zealand 47 0.79%
Brazil 44 0.74%
Sweden 43 0.72%
Philippines 39 0.65%
Finland 37 0.62%
Russia 34 0.57%
Ireland 33 0.55%
Denmark 31 0.52%
Norway 30 0.50%
Belgium 28 0.47%
90.31% of the participants were born in these countries.


Ethnicity

That one was difficult for many reasons and didn't encompass all possibilities, simply from lack of knowledge.
Ethnicity Participants # Percentage
Caucasian / White 4,583 76.69%
Hispanic / Latinx 332 5.56%
Multiracial 188 3.15%
East Asian 168 2.81%
Biracial 161 2.69%
African Descent / Black 155 2.59%
Indian / South Asian 120 2.01%
Other 83 1.39%
Jewish (the ethnicity, not the religion) 65 1.09%
Arab / Near Eastern / Middle Eastern 40 0.67%
American Indian or Alaskan Native 37 0.62%
Pacific Islander 24 0.40%
Aboriginal / Australian 20 0.33%

3.2 Education Level

5,976 participants in total

Current Level of Education

Highest Current Level of Education Participants # Percentage
Bachelor's degree 2061 34.49%
Some college / university 1309 21.90%
Master's degree 754 12.62%
Graduated high school / GED 721 12.06%
Associate's degree 350 5.86%
Trade / Technical / Vocational training 239 4.00%
Did not complete high school 238 3.98%
Professional degree 136 2.28%
Doctorate degree 130 2.18%
Post Doctorate 30 0.50%
Did not complete elementary school 8 0.13%

Future Education Plans

Educational Aims Participants # Percentage
I'm good where I am right now 1,731 28.97%
Master's degree 1,384 23.16%
Bachelor's degree 1,353 22.64%
Doctorate degree 639 10.69%
Vocational / Trade / Technical training 235 3.93%
Professional degree 214 3.58%
Post Doctorate 165 2.76%
Associate's degree 164 2.74%
Graduate high school / GED 91 1.52%
Of our 5,976 participants, a total of 1,576 (26.37%) returned to higher education after a break of 3+ years; the other 4,400 (73.63%) did not.
Degree (Major) Participants # Percentage
I don't have a degree or a major 1,010 16.90%
Other 580 9.71%
Health Sciences 498 8.33%
Engineering 455 7.61%
Information and Communication Technologies 428 7.16%
Arts and Music 403 6.74%
Social Sciences 361 6.04%
Business 313 5.24%
Life Sciences 311 5.20%
Literature and Languages 255 4.27%
Humanities 230 3.85%
Fundamental and Applied Sciences 174 2.91%
Teaching and Education Sciences 174 2.91%
Communication 142 2.38%
Law 132 2.21%
Economics and Politics 101 1.69%
Finance 94 1.57%
Social Sciences and Social Action 84 1.41%
Environment and Sustainable Development 70 1.17%
Marketing 53 0.89%
Administration and Management Sciences 52 0.87%
Environmental Planning and Design 24 0.40%
Fashion 18 0.30%
Theology and Religious Sciences 14 0.23%
A number of you commented in the free-form field at the end of the survey, that your degree was not present and that it wasn't related to any of the listed ones. We will try to mitigate this in the next survey!

3.3 Career and Finances

Out of the 5,976 participants, 2,199 (36.80%) work in the field they majored in, 953 (15.95%) graduated but do not work in their original field. 1,645 (27.53%) are still studying. The remaining 1,179 (19.73%) are either retired, currently unemployed or out of the workforce for unspecified reasons.
The top 10 industries our participants are working in are:
Industry Participants # Percentage
Health Care and Social Assistance 568 9.50%
Retail 400 6.69%
Arts, Entertainment, and Recreation 330 5.52%
College, University, and Adult Education 292 4.89%
Government and Public Administration 258 4.32%
Finance and Insurance 246 4.12%
Hotel and Food Services 221 3.70%
Scientific or Technical Services 198 3.31%
Software 193 3.23%
Information Services and Data Processing 169 2.83%
*Note that "other", "I'm a student" and "currently unemployed" have been disregarded for this part of the evaluation.
Out of the 4,477 participants active in the workforce, the majority (1,632 or 36.45%) work between 40-50 hours per week, 34.73% (1,555) are working 30-40 hours weekly. Less than 6% work >50 h per week, and 23.87% (1,024 participants) less than 30 hours.
718 or 16.04% are taking over managerial responsibilities (ranging from Jr. to Sr. Management); 247 (5.52%) are self employed or partners.
On a scale of 1 (lowest) to 10 (highest), the overwhelming majority (4,009 or 67.09%) indicated that career plays a very important role in their lives, attributing a score of 7 and higher.
Only 663 (11.09%) gave it a score below 4, indicating a low importance.
The importance of climbing the career ladder is very evenly distributed across all participants and ranges in a harmonized 7-12% range for each of the 10 steps of importance.
23.71% (1,417) of the participants are making extra income with a hobby or side job.
From the 5,907 participants not already retired, the overwhelming majority of 3,608 (61.11%) does not actively seek early retirement. From those who are, most (1,024 / 17.34%) want to do so between 55-64; 7 and 11% respectively in the age brackets before or after. Less than 3.5% are looking for retirement below 45 years of age.
1,127 participants decided not to disclose their income brackets. The remaining 4,849 are distributed as follows:
Income Participants # Percentage
$0 to $14,999 1,271 26.21%
$15,000 to $29,999 800 16.50%
$30,000 to $59,999 1,441 29.72%
$60,000 to $89,999 731 15.08%
$90,000 to $119,999 300 6.19%
$120,000 to $149,999 136 2.80%
$150,000 to $179,999 67 1.38%
$180,000 to $209,999 29 0.60%
$210,000 to $239,999 22 0.45%
$240,000 to $269,999 15 0.31%
$270,000 to $299,999 5 0.10%
$300,000 or more 32 0.66%

3.4 Child Status

5,976 participants in total
94.44% of the participants (5,644) would call themselves "childfree" (as opposed to 5.56% of the participants who would not call themselves childfree). However, only 68.51% of the participants (4,094) do not have children and do not want them in any capacity at any point in the future. The other 31.49% have a varying degree of indecision, child-wanting or child-having on their own or their (future) spouse's part.
These 4,094 childfree participants continued on to the following sections of the survey.

3.5 Current Location

4,094 childfree participants in total

Current Location

There were more than 200 options of country, so we are showing the top 10 CF countries.
Current Location Participants # Percentage
United States 2,495 60.94%
United Kingdom 331 8.09%
Canada 325 7.94%
Australia 146 3.57%
Germany 90 2.20%
Netherlands 66 1.61%
France 43 1.05%
Sweden 40 0.98%
New Zealand 33 0.81%
Poland 33 0.81%
The Top 10 amounts to 87.98% of the childfree participants' current location.

Current Location Qualification

These participants would describe their current city, town or neighborhood as:
Qualification Participants # Percentage
Urban 1,557 38.03%
Suburban 1,994 48.71%
Rural 543 13.26%

Tolerance to "Alternative Lifestyles" in Current Location

Figure 1
Figure 2
Figure 3

3.6 Religion and Spirituality

4094 childfree participants in total

Faith Originally Raised In

There were more than 50 options of faith, so we aimed to show the top 10 most chosen beliefs:
Faith Participants # Percentage
Christianity 2,624 64.09%
Atheism 494 12.07%
None (≠ Atheism. Literally, no notion of spirituality or religion in the upbringing) 431 10.53%
Agnosticism 248 6.06%
Judaism 63 1.54%
Other 45 1.10%
Hinduism 42 1.03%
Islam 40 0.98%
Buddhism 24 0.59%
Paganism 14 0.34%
This top 10 amounts to 98.3% of the 4,094 childfree participants.

Current Faith

There were more than 50 options of faith, so we aimed to show the top 10 most chosen beliefs:
Faith Participants # Percentage
Atheism 2,276 55.59%
Agnosticism 829 20.25%
Christianity 343 8.38%
Other 172 4.20%
Paganism 100 2.44%
Satanism 67 1.64%
Spiritualism 55 1.34%
Witchcraft 54 1.32%
Buddhism 43 1.05%
Judaism 30 0.73%
This top 10 amounts to 96.95% of the participants.

Level of Current Religious Practice

Level Participants # Percentage
Wholly secular / Non religious 3045 74.38%
Identify with religion, but don't practice strictly 387 9.45%
Lapsed / Not serious / In name only 314 7.67%
Observant at home only 216 5.28%
Observant at home. Church/Temple/Mosque/Etc. attendance 115 2.81%
Church/Temple/Mosque/Etc. attendance only 17 0.42%

Effect of Faith over Childfreedom

Figure 4

Effect of Childfreedom over Faith

Figure 5

3.7 Romantic and Sexual Life

4,094 childfree participants in total

Current Dating Situation

Status Participants # Percentage
Divorced 37 0.90
Engaged 215 5.25
Long term relationship, living together 758 18.51
Long term relationship, not living together 502 12.26
Married 935 22.84
Other 69 1.69
Separated 10 0.24
Short term relationship 82 2.00
Single and dating around, but not looking for anything serious 234 5.72
Single and dating around, looking for something serious 271 6.62
Single and not looking 975 23.82
Widowed 6 0.15

Ethical Non-Monogamy

Non-monogamy (or nonmonogamy) is an umbrella term for every practice or philosophy of intimate relationship that does not strictly hew to the standards of monogamy, particularly that of having only one person with whom to exchange sex, love, and affection.
82.3% of the childfree participants do not practice ethical non-monogamy, as opposed to 17.7% who say they do.

Childfree Partner

This question regards currently having a childfree or non-childfree partner, excluding the 36.7% of childfree participants who said they do not have a partner at the moment. For this question only, 2,591 childfree participants are considered.
Partner Participants # Percentage
Childfree partner 2105 81.2%
Non childfree partner 404 9.9%
More than one partner; all childfree 53 1.3%
More than one partner; some childfree 24 0.9%
More than one partner; none childfree 5 0.2%

Dating a Single Parent

Would the childfree participants be willing to date a single parent?
Answer Participants # Percentage
No, I'm not interested in single parents and their ties to parenting life 3693 90.2
Yes, but only if it's a short term arrangement of some sort 139 3.4
Yes, whether for long term or short term, but with some conditions 161 3.9
Yes, whether for long term or short term, with no conditions 101 2.5

3.8 Childhood and Family Life

On a scale from 1 (very unhappy) to 10 (very happy), how would you rate your childhood?
Answer Participants # Percentage
1 154 3.8%
2 212 5.2%
3 433 10.6%
4 514 12.6%
5 412 10.1%
6 426 10.4%
7 629 15.4%
8 704 17.2%
9 357 8.7%
10 253 6.2%

3.9 Sterilization

4,094 childfree participants in total
Sterilization Status Participants # Percentage
No, I am not sterilized and, for medical, practical or other reasons, I do not need to be 687 16.8
No. However, I've been approved for the procedure and I'm waiting for the date to arrive 119 2.9
No. I am not sterilized and don't want to be 585 14.3
No. I want to be sterilized but I have started looking for a doctor (doctor shopping) 328 8.0
No. I want to be sterilized but I haven't started doctor shopping yet 1896 46.3
Yes. I am sterilized 479 11.7

Already Sterilized

479 sterilized childfree participants in total

Age when starting doctor shopping or addressing issue with doctor

Age group Participants # Percentage
18 or younger 37 7.7%
19 to 24 131 27.3%
25 to 29 159 33.2%
30 to 34 92 19.2%
35 to 39 47 9.8%
40 to 44 9 1.9%
45 to 49 1 0.2%
50 to 54 1 0.2%
55 or older 2 0.4%

Age at the time of sterilization

Age group Participants # Percentage
18 or younger 4 0.8%
19 to 24 83 17.3%
25 to 29 181 37.8%
30 to 34 121 25.3%
35 to 39 66 13.8%
40 to 44 17 3.5%
45 to 49 3 0.6%
50 to 54 1 0.2%
55 or older 3 0.6%

Elapsed time between requesting procedure and undergoing procedure

Time Participants # Percentage
Less than 3 months 280 58.5
Between 3 and 6 months 78 16.3
Between 6 and 9 months 20 4.2
Between 9 and 12 months 10 2.1
Between 12 and 18 months 17 3.5
Between 18 and 24 months 9 1.9
Between 24 and 30 months 6 1.3
Between 30 and 36 months 4 0.8
Between 3 and 5 years 19 4.0
Between 5 and 7 years 9 1.9
More than 7 years 27 5.6

How many doctors refused at first, before finding one who would accept?

Doctor # Participants # Percentage
None. The first doctor I asked said yes 340 71.0%
One. The second doctor I asked said yes 56 11.7%
Two. The third doctor I asked said yes 37 7.7%
Three. The fourth doctor I asked said yes 15 3.1%
Four. The fifth doctor I asked said yes 8 1.7%
Five. The sixth doctor I asked said yes 5 1.0%
Six. The seventh doctor I asked said yes 4 0.8%
Seven. The eighth doctor I asked said yes 1 0.2%
Eight. The ninth doctor I asked said yes 1 0.2%
I asked more than 10 doctors before finding one who said yes 12 2.5%

Approved, not Sterilized Yet

119 approved but not yet sterilised childfree participants in total. Owing to the zero participants who were approved but not yet sterilised in the 45+ age group in the 2018 survey, these categories were removed from the 2019 survey.

Age when starting doctor shopping or addressing issue with doctor

Age group Participants # Percentage
18 or younger 11 9.2%
19 to 24 42 35.3%
25 to 29 37 31.1%
30 to 34 23 19.3%
35 to 39 5 4.2%
40 to 45 1 0.8%

How many doctors refused at first, before finding one who would accept?

Doctor # Participants # Percentage
None. The first doctor I asked said yes 77 64.7%
One. The second doctor I asked said yes 12 10.1%
Two. The third doctor I asked said yes 12 10.1%
Three. The fourth doctor I asked said yes 5 4.2%
Four. The fifth doctor I asked said yes 2 1.7%
Five. The sixth doctor I asked said yes 4 3.4%
Six. The seventh doctor I asked said yes 1 0.8%
Seven. The eighth doctor I asked said yes 1 0.8%
Eight. The ninth doctor I asked said yes 0 0.0%
I asked more than ten doctors before finding one who said yes 5 4.2%

How long between starting doctor shopping and finding a doctor who said "Yes"?

Time Participants # Percentage
Less than 3 months 65 54.6%
3 to 6 months 13 10.9%
6 to 9 months 9 7.6%
9 to 12 months 1 0.8%
12 to 18 months 2 1.7%
18 to 24 months 2 1.7%
24 to 30 months 1 0.8%
30 to 36 months 1 0.8%
3 to 5 years 8 6.7%
5 to 7 years 6 5.0%
More than 7 years 11 9.2%

Age when receiving green light for sterilization procedure?

Age group Participants # Percentage
18 or younger 1 0.8%
19 to 24 36 30.3%
25 to 29 45 37.8%
30 to 34 27 22.7%
35 to 39 9 7.6%
40 to 44 1 0.8%

Not Sterilized Yet But Looking

328 searching childfree participants in total

How many doctors did you ask so far?

Doctor # Participants # Percentage
1 204 62.2%
2 61 18.6%
3 29 8.8%
4 12 3.7%
5 7 2.1%
6 6 1.8%
7 1 0.3%
8 1 0.3%
9 1 0.3%
More than 10 6 1.8%

How long have you been searching so far?

Time Participants # Percentage
Less than 3 months 117 35.7%
3 to 6 months 44 13.4%
6 to 9 months 14 4.3%
9 to 12 months 27 8.2%
12 to 18 months 18 5.5%
18 to 24 months 14 4.3%
24 to 30 months 17 5.2%
30 to 36 months 9 2.7%
3 to 5 years 35 10.7%
5 to 7 years 11 3.4%
More than 7 years 22 6.7%

At what age did you start searching?

Age group Participants # Percentage
18 or younger 50 15.2%
19 to 24 151 46.0%
25 to 29 86 26.2%
30 to 34 31 9.5%
35 to 39 7 2.1%
40 to 44 2 0.6%
45 to 54 1 0.3%

3.10 Childfreedom

4,094 childfree participants in total
Only 1.1% of the childfree participants (46 out of 4,094) literally own a jetski, but 46.1% of the childfree participants (1,889 out of 4,094) figuratively own a jetski. A figurative jetski is an expensive material possession that would have been almost impossible to purchase had you had children.

Primary Reason to Not Have Children

Reason Participants # Percentage
Aversion towards children ("I don't like children") 1222 29.8
Childhood trauma 121 3.0
Current state of the world 87 2.1
Environmental (it includes overpopulation) 144 3.5
Eugenics ("I have "bad genes" ") 62 1.5
Financial 145 3.5
I already raised somebody else who isn't my child 45 1.1
Lack of interest towards parenthood ("I don't want to raise children") 1718 42.0
Maybe interested for parenthood, but not suited for parenthood 31 0.8
Medical ("I have a condition that makes conceiving/bearing/birthing children difficult, dangerous or lethal") 52 1.3
Other 58 1.4
Philosophical / Moral (e.g.: antinatalism) 136 3.3
Tokophobia (aversion/fear of pregnancy and/or childbirth) 273 6.7

4. Discussion

Section 1 : General Demographics

The demographics remain largely consistent with the 2018 survey. 85% of the participants are under 35, compared with 87.5% of the subreddit in the 2018 survey. 71.54% of the subreddit identify as female, compared with 70.4% in the 2018 survey. This is in contrast to the overall membership of Reddit, estimated at 74% male according to Reddit's Wikipedia page. There was a marked drop in the ratio of members who identify as heterosexual, from 67.7% in the 2018 survey to 54.89% in the 2019 survey. Ethnicity wise, 77% of members identified as primarily Caucasian, a slight drop from the 2018 survey, where 79.6% of members identified as primarily Caucasian.
Further research may be useful to explore the unusually high female membership of /childfree and the potential reasons for this. It is possible that the results are skewed towards those more inclined to complete a survey.
In the 2018 survey the userbase identified a number of missing ethnicities. This has been rectified in the current 2019 survey.

Section 2 : Education level

As it did in the 2018 survey, this section highlights the stereotype of childfree people as being well educated. 4% of participants did not complete high school, which is a slight increase from the 2018 survey, where 3.1% of participants did not graduate high school. This could potentially be explained by the slightly higher percentage of participants under 18. 5.6% of participants were under 18 at the time of the 2018 survey, and 8.2% of participants were under 18 at the time of the 2019 survey.
In the 2019 survey, the highest percentage of responses to the "What is your degree/major?" question fell under "I don't have a degree or a major" (16.9%) and "other" (9.71%). However, of the participants who were able to select a degree and/or major, the most popular responses were:
Response Participants # Percentage
Health Sciences 498 8.33%
Engineering 455 7.61%
Information and Communication Technologies 428 7.16%
Arts and Music 403 6.74%
Social Sciences 361 6.04%
Compared to the 2018 survey, Health Sciences has overtaken Engineering; however, the top 5 majors remain the same. There is significant diversity in the subreddit with regards to chosen degree/major.

Section 3 : Career and Finances

The highest percentage of participants (17.7%) listed themselves as students. However, among those currently working, significant diversity in chosen field of employment was noted. This is consistent with the 2018 survey. The highest percentage of people working in one of the fields listed remains in Healthcare and Social Services, slightly down from 9.9% in the 2018 survey to 9.5%.
One of the stereotypes of the childfree is wealth. However, this is not demonstrated in the survey results. 72.4% of participants earn under $60,000 USD per annum, while 87.5% earn under $90,000 per annum. 26.2% earn under $15,000 per annum. The results remain largely consistent with the 2018 survey. 1127 participants, or 19%, chose not to disclose this information. It is possible that this may have skewed the results if a significant proportion of these people were our high income earners, but this is impossible to explore.
A majority of our participants work between 30 and 50 hours per week (71.2%), which is markedly increased from the 2018 survey, where 54.6% of participants worked between 30 and 50 hours per week.

Section 4 : Child Status

This section existed solely to sift the childfree from the fencesitters and the non-childfree, in order to get answers only from the childfree. Childfree, as it is defined in the subreddit, is "I do not have children nor want to have them in any capacity (biological, adopted, fostered, step- or other) at any point in the future." 68.5% of participants identify as childfree, slightly up from the 2018 survey, where 66.3% of participants identified as childfree. This is surprising in light of the overall reputation of the subreddit across reddit, where it is often described as an "echo chamber".

Section 5 : Current Location

The location responses are largely similar to the 2018 survey, with a majority of participants living in a suburban or urban area. 86.7% of participants in the 2019 survey live in urban and suburban regions, compared with 87.6% of participants in the 2018 survey. There is likely a multifactorial reason for this, encompassing the younger, educated skew of participants, easier access to universities and employment, and the fact that a majority of the population worldwide localises to urban centres. There may be an element of increased progressive social viewpoints and identities in urban regions; however, this would need to be explored further from a sociological perspective to draw any definitive conclusions.
A majority of our participants (60.9%) live in the USA. The United Kingdom (8.1%), Canada (7.9%), Australia (3.6%) and Germany (2.2%) encompass the next four most popular responses. Compared to the 2018 survey, there has been a slight drop in USA membership (from 64%), Canadian membership (from 8.1%), and Australian membership (from 3.8%), while United Kingdom membership has risen slightly (from 7.3%). There has also been a slight increase in German membership, up from 1.7%. This may reflect a growing globalisation of the childfree concept.

Section 6 : Religion and Spirituality

A majority of participants were raised Christian (64.1%); however, the majority are currently atheist (55.6%) or agnostic (20.25%). This is consistent with the 2018 survey results.
A majority of participants (62.8%) rated religion as "not at all influential" to the childfree choice. This is consistent with the 2018 survey, where 60.9% rated religion as "not at all influential". Despite the high percentage of participants who identify as atheist or agnostic, religion does not appear to be related to or have an impact on the childfree choice.

Section 7 : Romantic and Sexual Life

60.7% of our participants are in a relationship at the time of the survey. This is an almost identical result to the 2018 survey, where 60.6% of our participants were in a relationship. A notable proportion of our participants are listed as single and not looking (23.8%) which is consistent with the 2018 survey. Considering the frequent posts seeking dating advice as a childfree person, it is surprising that such a high proportion of the participants are not actively seeking out a relationship.
Participants who practice ethical non-monogamy are a minority (17.7%), and this result is consistent with the results of the 2018 survey. Despite the reputation of childfree people for living an unconventional lifestyle, this finding suggests that a majority of our participants are monogamous.
84.2% of participants with partners of some kind have at least one childfree partner. This is consistent with the often irreconcilable element of one party desiring children and the other wishing to abstain from having children.

Section 8 : Childhood and Family Life

Overall, the participants skew towards a happier childhood.

Section 9 : Sterilization

While just under half of our participants (46.3%) wish to be sterilised, only 11.7% have been successful in achieving sterilisation. This is likely due to overarching resistance from the medical profession, although other factors such as the logistics of surgery and the cost may also contribute. This is also a decrease from the percentage of participants sterilised in the 2018 survey (14.8%). 31.1% of participants do not wish or need to be sterilised, suggesting a partial element of satisfaction with temporary birth control methods, or non-necessity due to no sexual activity.
Of the participants who did achieve sterilisation, a majority began the search between 19 and 29, with the highest proportion being in the 25-29 age group (33.2%). This is a drop from the 2018 survey, where 37.9% of people who started the search were between 25 and 29.
The majority of participants who sought out and were successful at achieving sterilisation, were again in the 25-29 age group (37.8%). This is consistent with the 2018 survey results.
Over half of the participants who were sterilised had the procedure completed in less than 3 months (58.5%). This is a decline from the number of participants who achieved sterilisation in 3 months in the 2018 survey (68%). The proportion of participants who have had one or more doctors refuse to perform the procedure has stayed consistent between the two surveys.

Section 10 : Childfreedom

The main reasons people choose the childfree lifestyle are a lack of interest in parenthood and an aversion to children. Of the people surveyed, 63.8% are pet owners, suggesting that this lack of interest in parenthood does not necessarily mean a lack of interest in all forms of caretaking. The community skews towards a dislike of children overall, which correlates well with the 81.4% of users choosing "no, I do not have, did not use to have and will not have a job that makes me heavily interact with children on a daily basis" in answer to "do you have a job that heavily makes you interact with children on a daily basis?".
A vast majority of the subreddit identifies as pro-choice (94.5%). This is likely due to a high level of concern about bodily autonomy and forced parenthood. However, only 70% support financial abortion, which would allow the non-pregnant person in a relationship to sever all financial and parental ties with a child.
45.9% identify as feminist; however, many users prefer to identify with egalitarianism or are unsure. Only 8% firmly do not identify as feminist.
Most of our users realised they did not want children at a young age. 60% of participants knew they did not want children by the age of 18, with 96% of users realising this by age 30. This correlates well with the age distribution of participants. Despite this early realisation of their childfree stance, 80.4% of participants have been "bingoed" at some stage in their lives. Only 13% of participants are opposed to parents making posts on this subreddit.
Bonus section: The Subreddit
In light of the "State of the Subreddit" survey from 2018, some of the questions from this survey were added to the current Subreddit Survey 2019.
By and large, our participants were lurkers (66.17%). Our participants were divided on their favourite flairs, with 33.34% selecting "I have no favourite". The next most favourite flair was "Rant", at 20.47%. Our participants were similarly divided on their least favourite flair, with 64.46% selecting "I have no least favourite". Potentially concerning were the 42.01% of participants who selected "I have never participated on this sub", suggesting a disparity between members who contributed to this survey and members who actually participate in the subreddit. To address this, next year's survey will clarify the "never participated" option by specifying that "never participated" means "never up/downvoting, reading posts or commenting" in addition to never posting.
A small minority of the survey participants (6.18%) selected "yes" to allowing polite, well-meaning lectures. An even smaller minority (2.76%) selected "yes" to allowing angry, trolling lectures. In response, lectures remain not tolerated, and are removed on sight or on report.
Almost half of our users (49.95%) support the use of terms such as breeder, mombie/moo, and daddict/duh on the subreddit, with a further 22.52% supporting use of these terms in the context of bad parents only. In response, use of the above and similar terms to describe parents remains permitted on this subreddit.
55.3% of users support the use of terms such as crotchfruit to describe children on the subreddit, with a further 17.42% of users supporting the use of this and similar terms in the context of bad children only. In response, use of the above and similar terms to describe children remains permitted on this subreddit.
56.03% of participants support allowing parents to post, with a further 28.77% supporting parent posts depending on context. In response, parent posts will continue to be allowed on the subreddit. Furthermore, 66.19% of participants support parents and the non-childfree making "I need your advice" posts, with a further 21.37% supporting these depending on context. In light of these results, we have decided to implement a new "regret" flair to better sort parents from fencesitters, which will be trialled until the next subreddit survey due to concern from some of our members. 64.92% of participants support parents making "I support you guys" posts. Therefore, these will continue to be allowed.
71.03% of participants support childfree under-18s participating in the subreddit. Therefore, we will continue to allow under-18s who stay within the overall Reddit age requirement.
We asked participants their opinion on moving Rants and Brants to a stickied weekly thread. Slightly less than half (49.73%) selected leaving them as they are, in their own posts. In light of the fact that Rants are one of the participants' favourite flairs, we will leave them as they are.
There was a divide among participants as to whether "newbie" questions should be removed, with an even spread between those who selected removal and those who selected leaving them as is. We have therefore decided to leave them as is.

5. Conclusion

Thank you to our participants who contributed to the survey. To whoever commented, "Do I get a donut?", no you do not, but you get our appreciation for pushing through all of the questions!
Overall there have been few significant changes in the community from 2018.

Thank you also for all of your patience!

submitted by CFmoderator to childfree


Boolean data type

From Wikipedia, the free encyclopedia
In computer science, the Boolean data type is a data type that has one of two possible values (usually denoted true and false), intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type (see probabilistic logic)—logic doesn't always need to be Boolean.



In programming languages with a built-in Boolean data type, such as Pascal and Java, comparison operators such as > and ≠ are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test Boolean-valued expressions.
Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data type. Common Lisp uses an empty list for false, and any other value for true. The C programming language uses an integer type, where relational expressions like i > j and logical expressions connected by && and || are defined to have value 1 if true and 0 if false, whereas the test parts of if, while, for, etc., treat any non-zero value as true.[1][2] Indeed, a Boolean variable may be regarded (and implemented) as a numerical variable with one binary digit (bit), which can store only two values. In practice, however, Booleans are usually implemented as a full word rather than a single bit; this is usually due to the ways computers transfer blocks of information.
Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations such as conjunction (AND , & , * ), disjunction (OR , | , + ), equivalence (EQV , = , == ), exclusive or/non-equivalence (XOR , NEQV , ^ , != ), and negation (NOT , ~ , ! ).
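As an illustration (not taken from the article), the operations listed above can all be expressed in Python, which offers both keyword operators and, because bool is an integer subtype, bitwise operators on Boolean values:

```python
# The five classic Boolean operations, written in Python.
a, b = True, False

print(a and b)    # conjunction (AND)  -> False
print(a or b)     # disjunction (OR)   -> True
print(a == b)     # equivalence (EQV)  -> False
print(a != b)     # exclusive or (XOR) -> True
print(not a)      # negation (NOT)     -> False

# Since bool is an integer subtype, &, | and ^ also work on bools:
print(a & b, a | b, a ^ b)   # False True True
```

The bitwise forms evaluate both operands, while and/or short-circuit; both spellings implement the same Boolean algebra on bool operands.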
In some languages, like Ruby, Smalltalk, and Alice, the true and false values belong to separate classes, i.e., True and False respectively, so there is no single Boolean type.
In SQL, which uses a three-valued logic for explicit comparisons because of its special treatment of NULLs, the Boolean data type (introduced in SQL:1999) is also defined to include more than two truth values, so that SQL Booleans can store all logical values resulting from the evaluation of predicates in SQL. A column of Boolean type can also be restricted to just TRUE and FALSE, though.

ALGOL and the built-in boolean type

One of the earliest programming languages to provide an explicit boolean data type was ALGOL 60 (1960), with values true and false and logical operators denoted by the symbols '∧' (and), '∨' (or), '⊃' (implies), '≡' (equivalence), and '¬' (not). Due to input device and character set limits on many computers of the time, however, most compilers used alternative representations for many of the operators, such as AND or 'AND'.
This approach with boolean as a built-in (either primitive or otherwise predefined) data type was adopted by many later programming languages, such as Simula 67 (1967), ALGOL 68 (1970),[3] Pascal (1970), Ada (1980), Java (1995), and C# (2000), among others.


Fortran

The first version of FORTRAN (1957) and its successor FORTRAN II (1958) have no logical values or operations; even the conditional IF statement takes an arithmetic expression and branches to one of three locations according to its sign; see arithmetic IF. FORTRAN IV (1962), however, follows the ALGOL 60 example by providing a Boolean data type (LOGICAL), truth literals (.TRUE. and .FALSE.), Boolean-valued numeric comparison operators (.EQ., .GT., etc.), and logical operators (.NOT., .AND., .OR.). In FORMAT statements, a specific format descriptor ('L') is provided for the parsing or formatting of logical values.[4]

Lisp and Scheme

The language Lisp (1958) never had a built-in Boolean data type. Instead, conditional constructs like cond assume that the logical value false is represented by the empty list (), which is defined to be the same as the special atom nil or NIL; whereas any other s-expression is interpreted as true. For convenience, most modern dialects of Lisp predefine the atom t to have value t, so that t can be used as a mnemonic notation for true.
This approach (any value can be used as a Boolean value) was retained in most Lisp dialects (Common Lisp, Scheme, Emacs Lisp), and similar models were adopted by many scripting languages, even ones having a distinct Boolean type or Boolean values; although which values are interpreted as false and which are true varies from language to language. In Scheme, for example, the false value is an atom distinct from the empty list, so the latter is interpreted as true.

Pascal, Ada, and Haskell

The language Pascal (1970) introduced the concept of programmer-defined enumerated types. A built-in Boolean data type was then provided as a predefined enumerated type with values FALSE and TRUE. By definition, all comparisons, logical operations, and conditional statements applied to and/or yielded Boolean values. Otherwise, the Boolean type had all the facilities which were available for enumerated types in general, such as ordering and use as indices. In contrast, converting between Booleans and integers (or any other types) still required explicit tests or function calls, as in ALGOL 60. This approach (Boolean is an enumerated type) was adopted by most later languages which had enumerated types, such as Modula, Ada, and Haskell.
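Pascal's view of Boolean as an ordered enumerated type can be illustrated in Python, whose bool behaves similarly; this is a hedged analogy, not Pascal code:

```python
# Python's bool is ordered with False before True, like a two-value
# enumerated type (FALSE, TRUE) in Pascal.
print(False < True)                   # True
print(sorted([True, False, True]))    # [False, True, True]

# Like Pascal enumerated values, Booleans can serve as sequence indices,
# because False and True act as 0 and 1:
labels = ["FALSE", "TRUE"]
print(labels[3 > 2])                  # TRUE
```

The indexing trick works precisely because, unlike Pascal, Python's bool is also an integer; in Pascal an explicit ord() would be needed for the same effect.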

C, C++, Objective-C, AWK

Initial implementations of the language C (1972) provided no Boolean type, and to this day Boolean values are commonly represented by integers (ints) in C programs. The comparison operators (>, ==, etc.) are defined to return a signed integer (int) result, either 0 (for false) or 1 (for true). Logical operators (&&, ||, !, etc.) and condition-testing statements (if, while) assume that zero is false and all other values are true.
After enumerated types (enum s) were added to the American National Standards Institute version of C, ANSI C (1989), many C programmers got used to defining their own Boolean types as such, for readability reasons. However, enumerated types are equivalent to integers according to the language standards; so the effective identity between Booleans and integers is still valid for C programs.
Standard C (since C99) provides a boolean type, called _Bool. By including the header stdbool.h, one can use the more intuitive name bool and the constants true and false. The language guarantees that any two true values will compare equal (which was impossible to achieve before the introduction of the type). Boolean values still behave as integers, can be stored in integer variables, and used anywhere integers would be valid, including in indexing, arithmetic, parsing, and formatting. This approach (Boolean values are just integers) has been retained in all later versions of C. Note that this does not mean that any integer value can be stored in a boolean variable.
C++ has a separate Boolean data type bool , but with automatic conversions from scalar and pointer values that are very similar to those of C. This approach was adopted also by many later languages, especially by some scripting languages such as AWK.
Objective-C also has a separate Boolean data type BOOL, with possible values being YES or NO, equivalents of true and false respectively.[5] Also, in Objective-C compilers that support C99, C's _Bool type can be used, since Objective-C is a superset of C.

Perl and Lua

Perl has no boolean data type. Instead, any value can behave as boolean in a boolean context (the condition of an if or while statement, an argument of && or ||, etc.). The number 0, the strings "0" and "", the empty list (), and the special value undef evaluate to false.[6] All else evaluates to true.
Lua has a boolean data type, but non-boolean values can also behave as booleans. The non-value nil evaluates to false, whereas every other data type always evaluates to true, regardless of value.


Tcl

Tcl has no separate Boolean type. As in C, the integers 0 (false) and 1 (true; in fact, any nonzero integer) are used.[7]
Examples of coding:
set v 1
if { $v } { puts "V is 1 or true" }

The above will show "V is 1 or true", since the expression evaluates to '1'.

set v ""
if { $v } ....

The above will raise an error, as the variable 'v' cannot be evaluated as '0' or '1'.

Python, Ruby, and JavaScript

Python, from version 2.3 forward, has a bool type which is a subclass of int, the standard integer type.[8] It has two possible values: True and False, which are special versions of 1 and 0 respectively and behave as such in arithmetic contexts. Also, a numeric value of zero (integer or fractional), the null value (None), the empty string, and empty containers (i.e. lists, sets, etc.) are considered Boolean false; all other values are considered Boolean true by default.[9] Classes can define how their instances are treated in a Boolean context through the special method __nonzero__ (Python 2) or __bool__ (Python 3). For containers, __len__ (the special method for determining the length of containers) is used if the explicit Boolean conversion method is not defined.
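The behaviour described above can be demonstrated directly with standard Python semantics:

```python
# bool is a subclass of int, and True/False behave as 1/0 in arithmetic.
print(issubclass(bool, int))   # True
print(True + True)             # 2

# Zero, None, the empty string and empty containers are all falsy:
print(bool(0), bool(None), bool(""), bool([]), bool(set()))

# A class controls its own truth value via __bool__ (Python 3):
class Toggle:
    def __init__(self, on):
        self.on = on
    def __bool__(self):
        return self.on

print(bool(Toggle(True)), bool(Toggle(False)))   # True False

# If only __len__ is defined, it is consulted instead:
class Bag:
    def __init__(self, items):
        self.items = items
    def __len__(self):
        return len(self.items)

print(bool(Bag([])), bool(Bag([1, 2])))          # False True
```

Toggle and Bag are hypothetical classes introduced only to show the __bool__ and __len__ hooks.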
In Ruby, in contrast, only nil (Ruby's null value) and a special false object are false; all else (including the integer 0 and empty arrays) is true.
In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false[10] are sometimes called falsy (the complement of which is truthy), to distinguish between strictly type-checked and coerced Booleans.[11] As opposed to Python, empty containers (Arrays, Maps, Sets) are considered truthy. Languages such as PHP also use this approach.

Next Generation Shell

Next Generation Shell has a Bool type. It has two possible values: true and false. Bool is not interchangeable with Int and has to be converted explicitly if needed. When the Boolean value of an expression is needed (for example in an if statement), the Bool method is called. The Bool method for built-in types is defined such that it returns false for a numeric value of zero, the null value, the empty string, empty containers (i.e. lists, sets, etc.), and external processes that exited with a non-zero exit code; for other values Bool returns true. Types for which a Bool method is defined can be used in a Boolean context. When evaluating an expression in a Boolean context, if no appropriate Bool method is defined, an exception is thrown.


SQL

Main article: Null (SQL) § Comparisons with NULL and the three-valued logic (3VL)
Booleans appear in SQL when a condition is needed, such as in a WHERE clause, in the form of a predicate produced by using operators such as comparison operators, the IN operator, IS (NOT) NULL, etc. However, apart from TRUE and FALSE, these operators can also yield a third state, called UNKNOWN, when a comparison with NULL is made.
The treatment of boolean values differs between SQL systems.
For example, in Microsoft SQL Server, the boolean type is not supported at all, neither as a standalone data type nor as representable as an integer. It shows the error message "An expression of non-boolean type specified in a context where a condition is expected" if a column is directly used in the WHERE clause, e.g. SELECT a FROM t WHERE a, while a statement such as SELECT column IS NOT NULL FROM t yields a syntax error. The BIT data type, which can store only the integers 0 and 1 apart from NULL, is commonly used as a workaround to store Boolean values, but conversions between the integer and a boolean expression must be done explicitly, e.g. UPDATE t SET flag = IIF(col IS NOT NULL, 1, 0) WHERE flag = 0.
In PostgreSQL, there is a distinct BOOLEAN type as in the standard[12] which allows predicates to be stored directly into a BOOLEAN column, and allows using a BOOLEAN column directly as a predicate in WHERE clause.
In MySQL, BOOLEAN is treated as an alias for TINYINT(1),[13] TRUE is the same as integer 1, and FALSE is the same as integer 0;[14] MySQL treats any non-zero integer as true when evaluating conditions.
The SQL-92 standard introduced the IS (NOT) TRUE, IS (NOT) FALSE, and IS (NOT) UNKNOWN operators, which evaluate a predicate; these predated the introduction of the boolean type in SQL:1999.
The SQL:1999 standard introduced a BOOLEAN data type as an optional feature (T031). When restricted by a NOT NULL constraint, a SQL BOOLEAN behaves like Booleans in other languages, which can store only TRUE and FALSE values. However, if it is nullable, which is the default like all other SQL data types, it can have the special null value also. Although the SQL standard defines three literals for the BOOLEAN type – TRUE, FALSE, and UNKNOWN – it also says that the NULL BOOLEAN and UNKNOWN "may be used interchangeably to mean exactly the same thing".[15][16] This has caused some controversy because the identification subjects UNKNOWN to the equality comparison rules for NULL. More precisely, UNKNOWN = UNKNOWN is not TRUE but UNKNOWN/NULL.[17] As of 2012 few major SQL systems implement the T031 feature.[18] Firebird and PostgreSQL are notable exceptions, although PostgreSQL implements no UNKNOWN literal; NULL can be used instead.[19]
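The UNKNOWN behaviour described above can be sketched in Python as Kleene three-valued logic, with None standing in for NULL/UNKNOWN; this is an illustration of the semantics, not any particular database's implementation:

```python
# Kleene three-valued AND/OR/NOT and a NULL-aware equality, as SQL
# evaluates predicates. None plays the role of UNKNOWN/NULL.
def and3(x, y):
    if x is False or y is False:
        return False                  # FALSE AND anything -> FALSE
    if x is None or y is None:
        return None                   # otherwise UNKNOWN propagates
    return True

def or3(x, y):
    if x is True or y is True:
        return True                   # TRUE OR anything -> TRUE
    if x is None or y is None:
        return None
    return False

def not3(x):
    return None if x is None else not x

def eq3(x, y):
    # Any comparison involving NULL yields UNKNOWN, which is why
    # UNKNOWN = UNKNOWN is itself UNKNOWN rather than TRUE.
    if x is None or y is None:
        return None
    return x == y

print(and3(True, None))   # None: TRUE AND UNKNOWN -> UNKNOWN
print(or3(True, None))    # True: TRUE OR UNKNOWN -> TRUE
print(eq3(None, None))    # None: UNKNOWN = UNKNOWN -> UNKNOWN
```

The last line shows the controversy mentioned above: identifying UNKNOWN with NULL makes even UNKNOWN compared with itself come out UNKNOWN.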



References

  1. "PostgreSQL: Documentation: 10: 8.6. Boolean Type". Archived from the original on 9 March 2018. Retrieved 1 May 2018.


submitted by finnarfish to copypasta

Reddcoin (RDD) 02/20 Progress Report - Core Wallet v3.1 Evolution & PoSV v2 - Commits & More Commits to v3.1! (Bitcoin Core 0.10, MacOS Catalina, QT Enhanced Speed and Security and more!)

Reddcoin (RDD) Core Dev Team Informal Progress Report, Feb 2020 - As any blockchain or software expert will confirm, the hardest part of making successful progress in blockchain and crypto is invisible to most users. As developers, the Reddcoin Core team relies on internal experts like John Nash, contributors offering their own code improvements to our repos (which we would love to see more of!), and especially upstream commits from experts working on open source projects like Bitcoin itself. We'd like to thank each and every one whose hard work has contributed to this progress.
As part of Reddcoin's evolution, and in order to include required security fixes and long-overdue speed improvements, the team has to this point incorporated the following code commits since our last v3.0.1 public release. In attempting to solve the relatively minor font display issue with MacOS Catalina, we uncovered a complicated interweaving of updates between Reddcoin Core, QT software, the MacOS SDK, Bitcoin Core, and related libraries and dependencies. This mandated a holistic approach: solve the Catalina display problem and, in doing so, prepare a more streamlined overall build and test system, allowing the team to roll out more frequent and more secure updates in the future, while also including some badly needed fixes in the current version of Core, which we have tentatively labeled Reddcoin Core Wallet v3.1.
Note: As indicated below, v3.1 is NOT YET AVAILABLE FOR DOWNLOAD BY THE PUBLIC. We will advise when it is.
The new v3.1 version should be ready for internal QA and build testing by the end of this week, with luck, and will be turned over to the public shortly thereafter once testing has proven no unexpected issues have been introduced. We know the delay has been a bit extended for our ReddHead MacOS Catalina stakers, and we hope to have them all aboard soon. We have moved with all possible speed while attempting to incorporate all the required work and testing, and ensuring security and safety for our ReddHeads.
Which leads us to PoSV v2 activation: the supermajority on Mainnet at the time of this writing has reached 5625/9000 blocks, or 62.5%. We have progressed quite well and without any reported user issues since release, but we need all of the community to participate! This activation, much like the funding mechanisms currently being debated by BCH and others, and employed by DASH, will be not only a catalyst for Reddcoin but will ensure its future by providing funding for the dev team. As a personal plea from the team, please help us support the PoSV v2 activation by staking your RDD, no matter how large or small your stake.
Every block and every RDD counts, and if you don't know how, we'll teach you! Live chat is fun as well as providing tech support you can trust from devs and community ReddHead members. Join us today in staking and online and collect some RDD "rain" from users and devs alike!
If you're holding Reddcoin and not staking, or you haven't upgraded your v2.x wallet to v3.0.1 (current release), we need you to help achieve consensus and activate PoSV v2! For details, see the pinned message here or our website or medium channel. Upgrade is simple and takes moments; if you're nervous or unsure, we're here to help live in Telegram or Discord, as well as other chat programs. See our website for links.
Look for more updates shortly as our long-anticipated Reddcoin Payment Gateway and Merchant Services API come online with point-of-sale support, as we announce the cross-crypto-project Aussie firefighter fundraiser program, as well as a comprehensive update to our development roadmap and more.
Work has restarted on ReddID, and multiple initiatives are underway to begin educating and sharing information about ReddID: what it is and how to use it, as we approach a releasable ReddID product. We enthusiastically encourage anyone interested in working to bring these efforts to life. Whether you are a writer, UX/UI expert, big data analyst, graphic artist, coder, front-end, back-end, AI, or DevOps specialist, the Reddcoin Core dev team is growing, and there's more opportunity and work than ever!
Bring your talents to a community and dev team that truly appreciates it, and share the Reddcoin Love!
And now, lots of commits. As v3.1 is not yet quite ready for public release, these commits have not been pushed publicly, but in the interests of sharing progress transparently, and including our ReddHead community in the process, see below for mind-numbing technical detail of work accomplished.
e5c143404 - - 2014-08-07 - Ross Nicoll - Changed LevelDB cursors to use scoped pointers to ensure destruction when going out of scope. *99a7dba2e - - 2014-08-15 - Cory Fields - tests: fix test-runner for osx. Closes ##4708 *8c667f1be - - 2014-08-15 - Cory Fields - build: add to the list of meta-depends *bcc1b2b2f - - 2014-08-15 - Cory Fields - depends: fix shasum on osx < 10.9 *54dac77d1 - - 2014-08-18 - Cory Fields - build: add option for reducing exports (v2) *6fb9611c0 - - 2014-08-16 - randy-waterhouse - build : fix CPPFLAGS for libbitcoin_cli *9958cc923 - - 2014-08-16 - randy-waterhouse - build: Add --with-utils (bitcoin-cli and bitcoin-tx, default=yes). Help string consistency tweaks. Target sanity check fix. *342aa98ea - - 2014-08-07 - Cory Fields - build: fix automake warnings about the use of INCLUDES *46db8ad51 - - 2020-02-18 - John Nash - build: add build.h to the correct target *a24de1e4c - - 2014-11-26 - Pavel Janík - Use complete path to include bitcoin-config.h. *fd8f506e5 - - 2014-08-04 - Wladimir J. van der Laan - qt: Demote ReportInvalidCertificate message to qDebug *f12aaf3b1 - - 2020-02-17 - John Nash - build: QT5 compiled with fPIC require fPIC to be enabled, fPIE is not enough *7a991b37e - - 2014-08-12 - Wladimir J. van der Laan - build: check for sys/prctl.h in the proper way *2cfa63a48 - - 2014-08-11 - Wladimir J. van der Laan - build: Add mention of --disable-wallet to bdb48 error messages *9aa580f04 - - 2014-07-23 - Cory Fields - depends: add shared dependency builder *8853d4645 - - 2014-08-08 - Philip Kaufmann - [Qt] move SubstituteFonts() above ToolTipToRichTextFilter *0c98e21db - - 2014-08-02 - Ross Nicoll - URLs containing a / after the address no longer cause parsing errors. 
*7baa77731 - - 2014-08-07 - ntrgn - Fixes ignored qt 4.8 codecs path on windows when configuring with --with-qt-libdir
*2a3df4617 - - 2014-08-06 - Cory Fields - qt: fix unicode character display on osx when building with 10.7 sdk
*71a36303d - - 2014-08-04 - Cory Fields - build: fix race in 'make deploy' for windows
*077295498 - - 2014-08-04 - Cory Fields - build: Fix 'make deploy' when binaries haven't been built yet
*ffdcc4d7d - - 2014-08-04 - Cory Fields - build: hook up qt translations for static osx packaging
*25a7e9c90 - - 2014-08-04 - Cory Fields - build: add --with-qt-translationdir to configure for use with static qt
*11cfcef37 - - 2014-08-04 - Cory Fields - build: teach macdeploy the -translations-dir argument, for use with static qt
*4c4ae35b1 - - 2014-07-23 - Cory Fields - build: Find the proper xcb/pcre dependencies
*942e77dd2 - - 2014-08-06 - Cory Fields - build: silence mingw fpic warning spew
*e73e2b834 - - 2014-06-27 - Huang Le - Use async name resolving to improve net thread responsiveness
*c88e76e8e - - 2014-07-23 - Cory Fields - build: don't let libtool insert rpath into binaries
*18e14e11c - - 2014-08-05 - ntrgn - build: Fix windows configure when using --with-qt-libdir
*bb92d65c4 - - 2014-07-31 - Cory Fields - test: don't let the port number exceed the legal range
*62b95290a - - 2014-06-18 - Cory Fields - test: redirect comparison tool output to stdout
*cefe447e9 - - 2014-07-22 - Cory Fields - gitian: remove unneeded option after last commit
*9347402ca - - 2014-07-21 - Cory Fields - build: fix broken boost chrono check on some platforms
*c9ed039cf - - 2014-06-03 - Cory Fields - build: fix whitespace in pkg-config variable
*3bcc5ad37 - - 2014-06-03 - Cory Fields - build: allow linux and osx to build against static qt5
*01a44ba90 - - 2014-07-17 - Cory Fields - build: silence false errors during make clean
*d1fbf7ba2 - - 2014-07-08 - Cory Fields - build: fix win32 static linking after libtool merge
*005ae2fa4 - - 2014-07-08 - Cory Fields - build: re-add AM_LDFLAGS where it's overridden
*37043076d - - 2014-07-02 - Wladimir J. van der Laan - Fix the Qt5 build after d95ba75
*f3b4bbf40 - - 2014-07-01 - Wladimir J. van der Laan - qt: Change serious messages from qDebug to qWarning
*f4706f753 - - 2014-07-01 - Wladimir J. van der Laan - qt: Log messages with type>QtDebugMsg as non-debug
*98e85fa1f - - 2014-06-06 - Pieter Wuille - libsecp256k1 integration
*5f1f2e226 - - 2020-02-17 - John Nash - Merge branch 'switch_verification_code' into Build
*1f30416c9 - - 2014-02-07 - Pieter Wuille - Also switch the (unused) verification code to low-s instead of even-s.
*1c093d55e - - 2014-06-06 - Cory Fields - secp256k1: Add build-side changes for libsecp256k1
*7f3114484 - - 2014-06-06 - Cory Fields - secp256k1: add libtool as a dependency
*2531f9299 - - 2020-02-17 - John Nash - Move network-time related functions to timedata.cpp/h
*d003e4c57 - - 2020-02-16 - John Nash - build: fix build weirdness after 54372482.
*7035f5034 - - 2020-02-16 - John Nash - Add ::OUTPUT_SIZE
*2a864c4d8 - - 2014-06-09 - Cory Fields - crypto: create a separate lib for crypto functions
*03a4e4c70 - - 2014-06-09 - Cory Fields - crypto: explicitly check for byte read/write functions
*a78462a2a - - 2014-06-09 - Cory Fields - build: move bitcoin-config.h to its own directory
*a885721c4 - - 2014-05-31 - Pieter Wuille - Extend and move all crypto tests to crypto_tests.cpp
*5f308f528 - - 2014-05-03 - Pieter Wuille - Move {Read,Write}{LE,BE}{32,64} to common.h and use builtins if possible
*0161cc426 - - 2014-05-01 - Pieter Wuille - Add built-in RIPEMD-160 implementation
*deefc27c0 - - 2014-04-28 - Pieter Wuille - Move crypto implementations to src/crypto/
*d6a12182b - - 2014-04-28 - Pieter Wuille - Add built-in SHA-1 implementation.
*c3c4f9f2e - - 2014-04-27 - Pieter Wuille - Switch miner.cpp to use sha2 instead of OpenSSL.
*b6ed6def9 - - 2014-04-28 - Pieter Wuille - Remove getwork() RPC call
*0a09c1c60 - - 2014-04-26 - Pieter Wuille - Switch script.cpp and hash.cpp to use sha2.cpp instead of OpenSSL.
*8ed091692 - - 2014-04-20 - Pieter Wuille - Add a built-in SHA256/SHA512 implementation.
*0c4c99b3f - - 2014-06-21 - Philip Kaufmann - small cleanup in src/compat .h and .cpp
*ab1369745 - - 2014-06-13 - Cory Fields - sanity: hook up sanity checks
*f598c67e0 - - 2014-06-13 - Cory Fields - sanity: add libc/stdlib sanity checks
*b241b3e13 - - 2014-06-13 - Cory Fields - sanity: autoconf check for sys/select.h
*cad980a4f - - 2019-07-03 - John Nash - build: Add a top-level forwarding target for src/ objects
*f4533ee1c - - 2019-07-03 - John Nash - build: qt: split locale resources. Fixes non-deterministic distcheck
*4a0e46e76 - - 2019-06-29 - John Nash - build: fix version dependency
*2f61699d9 - - 2019-06-29 - John Nash - build: quit abusing AMCPPFLAGS
*99b60ba49 - - 2019-06-29 - John Nash - build: avoid the use of top and abs_ dir paths
*c8f673d5d - - 2019-06-29 - John Nash - build: Tidy up file generation output
*5318bce57 - - 2019-06-29 - John Nash - build: nuke Makefile.include from orbit
*672a25349 - - 2019-06-29 - John Nash - build: add stub makefiles for easier subdir builds
*562b7c5a6 - - 2020-02-08 - John Nash - build: delete old's
*066120079 - - 2020-02-08 - John Nash - build: Switch to non-recursive make
Whew! No wonder it's taken the dev team a while! :)
TL;DR: Trying to fix macOS Catalina font display led to all kinds of work to migrate and evolve the Reddcoin Core software along with its Apple, Bitcoin, and Qt components. Lots of work done, v3.1 public release soon. Also other exciting things, and ReddID is back under active development.
submitted by TechAdept to reddCoin

dcrd Version 1.5.0 Release Candidate 1

Release Candidates are public previews of software that are functional and nearing release, but still require testing to catch any potential issues. If you are an adventurous individual who is willing to help test and report any issues, please do so. However, be aware that running pre-release software may require a downgrade and/or redownload of the chain in extreme cases.

CLI Binaries:

dcrd v1.5.0-rc1

This release of dcrd introduces a large number of updates. Some of the key highlights are:
For those unfamiliar with the voting process in Decred, all of the code needed to support block header commitments is already included in this release; however, its enforcement will remain dormant until the stakeholders vote to activate it.
For reference, block header commitments were originally proposed and approved for initial implementation via the following Politeia proposal:
The following Decred Change Proposal (DCP) describes the proposed changes in detail and provides a full technical specification:

Downgrade Warning

The database format in v1.5.0 is not compatible with previous versions of the software. This only affects downgrades, as users upgrading from previous versions will see a one-time database migration.
Once this migration has been completed, it will no longer be possible to downgrade to a previous version of the software without having to delete the database and redownload the chain.

Notable Changes

Block Header Commitments Vote

A new vote with the id headercommitments is now available as of this release. After upgrading, stakeholders may set their preferences through their wallet or Voting Service Provider's (VSP) website.
The primary goal of this change is to increase the security and efficiency of lightweight clients, such as Decrediton in its lightweight mode and the dcrandroid/dcrios mobile wallets, as well as add infrastructure that paves the way for several future scalability enhancements.
A high-level overview aimed at a general audience, including a cost-benefit analysis, can be found in the Politeia proposal.
In addition, a much more in-depth treatment can be found in the motivation section of DCP0005.

Version 2 Block Filters

The block filters used by lightweight clients, such as SPV (Simplified Payment Verification) wallets, have been updated to improve their efficiency and ergonomics, and to include additional information such as the full ticket commitment script. The new block filters are version 2. The older version 1 filters are now deprecated and scheduled for removal in the next release, so consumers should update to the new filters as soon as possible.
An overview of block filters can be found in the block filters section of DCP0005.
Also, the specific contents and technical specification of the new version 2 block filters is available in the version 2 block filters section of DCP0005.
Finally, there is a one-time database update to build and store the new filters for all existing historical blocks, which will likely take a while to complete (typically around 8 to 10 minutes on HDDs and 4 to 5 minutes on SSDs).

Mining Infrastructure Overhaul

The mining infrastructure for building block templates and delivering the work to miners has been significantly overhauled to improve several aspects as follows:
The standard getwork RPC that PoW miners currently use to perform the mining process has been updated to make use of this new infrastructure, so existing PoW miners will seamlessly get the vast majority of benefits without requiring any updates.
However, a new notifywork RPC is now also available that allows miners to register for work to be delivered asynchronously, as it becomes available, via a WebSockets work notification. These notifications include the same information that getwork provides, along with an additional reason parameter that allows miners to make better decisions about whether workers should discard the current template immediately or be allowed to finish their current round before being provided with the new template.
Miners are highly encouraged to update their software to make use of the new asynchronous notification infrastructure since it is more robust, efficient, and faster than polling getwork to manually determine the aforementioned conditions.
The following is a non-exhaustive overview that highlights the major benefits of the changes for both cases:
PoW miners who choose to update their software, pool or otherwise, to make use of the asynchronous work notifications will receive additional benefits such as:
NOTE: Miners that are not rolling the timestamp field as they mine should ensure their software is upgraded to roll the timestamp to the latest timestamp each time they hand work out to a miner. This helps ensure the block timestamps are as accurate as possible.
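To make the two delivery models above concrete, here is a hedged Python sketch of the JSON-RPC plumbing: building a getwork poll versus a one-time notifywork registration, plus a toy dispatcher keyed on the notification's reason parameter. The reason values compared are illustrative placeholders (not the exact strings dcrd emits), and transport/authentication are omitted.

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC request body in the style dcrd accepts."""
    return json.dumps({"jsonrpc": "1.0", "id": req_id,
                       "method": method, "params": params or []})

# Polling model: ask for fresh work on a timer.
getwork_req = jsonrpc_request("getwork")

# Push model: register once over a WebSocket connection; the node then
# sends work notifications as new templates become available.
notifywork_req = jsonrpc_request("notifywork")

def should_flush_workers(reason):
    """Toy policy for the notification's reason parameter.

    The value compared here is an illustrative placeholder: flush
    immediately when the chain tip changed, let workers finish their
    current round otherwise.
    """
    return reason == "tip_changed"
```

The push model is preferable because the node, not the miner, decides exactly when a new template matters, which removes both polling latency and wasted requests.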

Transaction Script Validation Optimizations

Transaction script validation has been almost completely rewritten to significantly improve its speed and reduce the number of memory allocations. While this has many more benefits than enumerated here, probably the most important ones for most stakeholders are:

Automatic External IP Address Discovery

In order for nodes to fully participate in the peer-to-peer network, they must be publicly accessible and made discoverable by advertising their external IP address. This is typically made slightly more complicated since most users run their nodes on networks behind Network Address Translation (NAT).
Previously, in addition to configuring the network firewall and/or router to allow inbound connections to port 9108 and forwarding the port to the internal IP address running dcrd, it was also required to manually set the public external IP address via the --externalip CLI option.
This release now makes use of other nodes on the network, in a decentralized fashion, to automatically discover the external IP address, so it is no longer necessary for the vast majority of users to set the --externalip CLI option manually.
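As a minimal sketch, a NAT'd node's dcrd.conf before this release typically carried an entry like the commented one below (the option names are dcrd's; the address is an example placeholder):

```
; dcrd.conf (sketch) -- prior to v1.5.0 a node behind NAT typically needed:
; externalip=203.0.113.7   ; <- can now be omitted; discovered automatically
; The listen port still has to be reachable through the firewall/router
; port-forward described above:
listen=0.0.0.0:9108
```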

Tor IPv6 Support

It is now possible to resolve and connect to IPv6 peers over Tor in addition to the existing IPv4 support.

RPC Server Changes

New Version 2 Block Filter Query RPC (getcfilterv2)

A new RPC named getcfilterv2 is now available which can be used to retrieve the version 2 block filter for a given block along with its associated inclusion proof. See the getcfilterv2 JSON-RPC API Documentation for API details.
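As a rough illustration (transport and authentication omitted), the request body for such a call might be built as below; the block hash used is just a placeholder, not a real block:

```python
import json

def getcfilterv2_request(block_hash, req_id=1):
    """Build the JSON-RPC body for a getcfilterv2 call; the response
    carries the version 2 filter and its inclusion proof."""
    return json.dumps({"jsonrpc": "1.0", "id": req_id,
                       "method": "getcfilterv2",
                       "params": [block_hash]})

example_hash = "0" * 64  # placeholder, not a real block hash
req = getcfilterv2_request(example_hash)
```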

New Network Information Query RPC (getnetworkinfo)

A new RPC named getnetworkinfo is now available which can be used to query information related to the peer-to-peer network such as the protocol version, the local time offset, the number of current connections, the supported network protocols, the current transaction relay fee, and the external IP addresses for the local interfaces. See the getnetworkinfo JSON-RPC API Documentation for API details.

Updates to Chain State Query RPC (getblockchaininfo)

The difficulty field of the getblockchaininfo RPC is now deprecated in favor of a new field named difficultyratio, which matches the result returned by the getdifficulty RPC.
See the getblockchaininfo JSON-RPC API Documentation for API details.
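A client migrating across this deprecation might read the new field with a fallback; a minimal sketch, assuming the RPC result has already been parsed into a dict:

```python
def difficulty_ratio(chain_info):
    """Return the difficulty ratio from a parsed getblockchaininfo result.

    Prefers the new difficultyratio field and falls back to the
    deprecated difficulty field when talking to an older node.
    """
    if "difficultyratio" in chain_info:
        return chain_info["difficultyratio"]
    return chain_info["difficulty"]
```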

New Optional Version Parameter on Script Decode RPC (decodescript)

The decodescript RPC now accepts an additional optional parameter to specify the script version. The only script version currently supported in Decred is version 0, which means scripts with versions other than 0 will be decoded as non-standard.
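A hedged sketch of passing the new optional parameter (the script hex here is simply OP_TRUE, 0x51, chosen for brevity):

```python
import json

def decodescript_request(script_hex, version=0, req_id=1):
    """Build a decodescript call with the new optional script version;
    0 is the only version Decred currently recognizes."""
    return json.dumps({"jsonrpc": "1.0", "id": req_id,
                       "method": "decodescript",
                       "params": [script_hex, version]})

req = decodescript_request("51")  # "51" is just OP_TRUE as hex
```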

Removal of Deprecated Block Template RPC (getblocktemplate)

The previously deprecated getblocktemplate RPC is no longer available. All known miners are already using the preferred getwork RPC since Decred's block header supports more than enough nonce space to keep mining hardware busy without needing to resort to building custom templates with less efficient extra nonce coinbase workarounds.

Additional RPCs Available To Limited Access Users

The following RPCs that were previously unavailable to the limited access RPC user are now available to it:

Single Mining State Request

The peer-to-peer protocol message to request the current mining state (getminingstate) is used when peers first connect in order to retrieve all known votes for the current tip block. It is only useful when the peer first connects, because all future votes are relayed once the connection has been established. Consequently, nodes will now only respond to a single mining state request; subsequent requests are ignored.
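The once-per-connection rule amounts to a single piece of per-peer state; a toy Python sketch (not dcrd's actual Go implementation):

```python
class Peer:
    """Per-peer state for the once-per-connection mining state rule."""

    def __init__(self):
        self.mining_state_sent = False

    def handle_get_mining_state(self):
        # Only the first request after connecting gets a reply; all
        # later requests from the same peer are silently ignored.
        if self.mining_state_sent:
            return None
        self.mining_state_sent = True
        return "miningstate"  # placeholder for the real reply message
```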

Developer Go Modules

A full suite of versioned Go modules (essentially code libraries) is now available for use by applications written in Go that wish to create robust software with reproducible, verifiable, and verified builds.
These modules are used to build dcrd itself and are therefore well maintained, tested, documented, and relatively efficient.
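The release notes don't prescribe specific module paths or versions, so the go.mod fragment below is a hypothetical sketch: the github.com/decred/dcrd module paths exist, but the version numbers are placeholders that `go get` would resolve to actual releases.

```
module example.com/myapp

go 1.13

require (
    github.com/decred/dcrd/chaincfg/chainhash v1.0.0 // placeholder version
    github.com/decred/dcrd/wire v1.0.0               // placeholder version
)
```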


This release consists of 600 commits from 17 contributors, totaling 537 files changed, 41494 lines added, and 29215 lines deleted.
All commits since the last release may be viewed on GitHub here.

Protocol and network:

Transaction relay (memory pool):



dcrd command-line flags and configuration:

certgen utility changes:

dcrctl utility changes:

promptsecret utility changes:


Developer-related package and module changes:

...continued in a separate post since it exceeds per-post limits.
submitted by davecgh to decred

New STEALTH X Indicator and VIP Package - Best Binary Options Tradings
90%+ Accurate! Binary Option, Scalping VIP Indicator! Semi Automated, Best algorithmic trend tool
Best Binomo - Binary option - MT4 Indicator // Trading Signal Software // Free Download | 2020
YOGA ARROW Indicator - Best Binary Options Tradings
Binary Options - MT4 Indicators | Real Account | IQ Option

Binary Arrow Indicator – the simplest indicator for Binary Options trading. RED arrow = PUT Option; GREEN arrow = CALL Option. Example of a chart with the Binary Arrow indicator: there aren’t too many signals, so traders need to watch multiple 5-minute currency charts of different pairs in order to reap the maximum benefit from this strategy.

Dec 22, 2016 · A Binary-Options strategy has to call a function of the Binary-Options-Strategy-Tester (via the Binary-Options-Strategy-Library) to place the virtual trades. Boundary options: for binary options traders, dojis and long-legged dojis offer the opportunity to win a trade.

The software belongs to Business Tools. The most popular versions among Binary Option Robot users are 1.9, 1.8, and 1.7. This free software is a product of Binary Options Robot. It was developed to work on Windows XP, Windows Vista, Windows 7, Windows 8, or Windows 10 and is compatible with 32-bit systems.

Page 1 of 3 - Bill Williams Arrow (90% success) - posted in Metatrader 4 Indicators (MT4): Hi friends, and as always I apologize for my English. I found this indicator, have been testing it, and it NEVER repaints. It begins at the beginning of the candle (more or less) and works best on timeframes from 5 minutes up. It has a very high success rate. If anyone can add a sound alarm, that would be great. Regards.

This site is a portal for traders with a variety of trading tools (Forex and Binary Options indicators, trading systems and strategies for different trading styles, and also Expert Advisors) that can be downloaded absolutely free. The website contains indicators and trading systems for Forex and Binary Options, and we regularly supplement our collection of trading tools.
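The arrow convention above reduces to a simple rule: green arrow means CALL, red arrow means PUT. As a toy illustration only (this is not the actual MT4 indicator's logic), the Python sketch below emits an arrow-style signal from a moving-average crossover on closing prices:

```python
def sma(values, period):
    """Simple moving average of the trailing `period` values."""
    return sum(values[-period:]) / period

def arrow_signal(closes, fast=3, slow=5):
    """Toy arrow logic: compare fast and slow SMAs on the last two bars
    and signal only on a crossover (hypothetical periods, not the
    indicator's real parameters)."""
    if len(closes) < slow + 1:
        return None  # not enough bars to compare two SMA readings
    prev_fast, prev_slow = sma(closes[:-1], fast), sma(closes[:-1], slow)
    cur_fast, cur_slow = sma(closes, fast), sma(closes, slow)
    if prev_fast <= prev_slow and cur_fast > cur_slow:
        return "CALL"  # green arrow: fast SMA crossed above slow SMA
    if prev_fast >= prev_slow and cur_fast < cur_slow:
        return "PUT"   # red arrow: fast SMA crossed below slow SMA
    return None
```

Signaling only on the crossover bar, rather than whenever the fast SMA is above or below the slow one, is what keeps the signals sparse, which matches the note above about there not being too many of them.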


New STEALTH X Indicator and VIP Package - Best Binary Options Tradings

The best binary option indicator 2018. This is a video for binary options traders who need a high-win-rate IQ Option strategy. Indicators that work in binary options are moving averages. Tags: using mt4 on binary options iq option mt4 trading metatrader 4 mt4 indicators binary option - FOR MORE STRATEGIES, TOOLS, and INDICATORS - visit our site at: IQ Option Indicators IQ Option Robot ... Binary Options Strategy Binary Options Method Binary Options Signals Binary Options Robot Binary Options Indicators Forex Strategy ... This binary options indicator will give you the edge in the financial markets; it is unique and accurate. This binary options trading strategy is all you need to succeed in binary options. Dear viewers, this is a most powerful strategy. It is 100% risk-free and totally free without any hidden charges. Subscribe and email me at [email protected] and you will get this strategy ...
