journals-feed-atom

<?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> <title>Tactical Typos</title> <link href="http://example.org/"/> <updated>2019-05-25T08:16:34Z</updated> <author> <name>Brandon Hall</name> </author> <id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id> <entry> <title>3rd-May-2019</title> <link href="3rd-May-2019.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-05-25T08:16:34Z</updated> <published>2019-05-03T15:21:39Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> The list of [[libraries that Mode supports|https://mode.com/help/articles/notebook/#supported-libraries]] makes for a good survey of libraries and directions in DataAnalysis that one could/should check out. For instance, prior to seeing this page I hadn't heard of the notion of "defensive data analysis" ([[see Engarde|https://engarde.readthedocs.io/en/latest/index.html]]), and I hadn't seen a description of survival analysis that was nearly [[this good|https://lifelines.readthedocs.io/en/latest/Survival%20Analysis%20intro.html]] (from the Lifelines package). --- Last Friday and Saturday I was wondering whether, if a dataset excludes a median statistic but provides `max`, `min`, and `avg`, one could still use the midpoint of max and min (`(max + min)/2`) as a stand-in for the median, comparing this midpoint with the avg to determine whether the distribution considered is skewed. I suggested to Ryan that it might be possible to use a measure of (`Avg/Midpoint`) to decide whether a given average (the arithmetic mean) is above or below the median, but I'd need to test this by running simulations (my first instance of realizing when simulations might be run and why!) with known medians. I was considering this after looking over a salary guide/report provided by the recruiting agency that I've been working with, [[Accounting Principals|http://accountingprincipals.com]]. The guide provided only upper bounds, lower bounds, and averages for salaries across many positions. I was worried that the averages would be skewed, so I got to thinking about whether/how I might be able to use the other data that is given to determine whether it is indeed skewed. The above formula was what I came up with for comparison (`Avg/((max + min)/2)`), but I'm still not certain whether it'd behave in the way that I'm hoping it does, either always or a decent % of the time. Edit: May 23, 2019: I think that this test would fail in precisely the cases I hope to use it to detect: cases in which there's skewness.
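A minimal sketch of the kind of simulation I have in mind --- the distributions, parameters, and sample sizes below are arbitrary assumptions (nothing derived from the salary data), just to see how `Avg/Midpoint` behaves against a known median:

```python
# Sketch: does avg/midpoint flag skew the way I hope? Distributions are assumed.
import numpy as np

rng = np.random.default_rng(0)

samples = {
    "symmetric (normal)": rng.normal(50_000, 8_000, 10_000),
    "right-skewed (lognormal)": rng.lognormal(mean=10.8, sigma=0.5, size=10_000),
}

for name, x in samples.items():
    avg, median = x.mean(), np.median(x)
    midpoint = (x.max() + x.min()) / 2
    print(f"{name}: avg={avg:,.0f}  median={median:,.0f}  "
          f"midpoint={midpoint:,.0f}  avg/midpoint={avg / midpoint:.2f}")
```

With a long right tail the `max` gets dragged far out, so the midpoint inflates along with (or faster than) the mean --- which seems consistent with the May 23 edit: the ratio looks least trustworthy exactly when the distribution is skewed.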
</div> </content> </entry> <entry> <title>16th-April-2019</title> <link href="16th-April-2019.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-04-16T05:23:50Z</updated> <published>2019-04-16T04:26:40Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> People's behavior is at least partially determined by the information that they have/are receiving. [Behavior as a function of information + other variables] --- A researcher is also looking for PDF-to-text tools for text analysis: https://askubuntu.com/a/344080 Calibre might offer the best, general method of extracting text from PDFs. --- Relative environmental costs/benefits (social? ScopeOfApplication between Social/Private in [[Economics]], EnvironmentalEcon) in terms of freezing veggies to lock in nutrients versus canning [it was suggested to me by a friend that canning removes nutrients from vegetables whereas freezing them does not]. However, is the benefit conferred to an individual who consumes frozen veggies, from the added nutrients, worth the cost of maintaining the frozen state of the veggies? [Could change according to changes in the efficiency of freezing and temperature control technology, such as [[passive heaters/coolers that are well insulated via vacuum technology|https://www.lowtechmagazine.com/2014/07/cooking-pot-insulation-key-to-sustainable-cooking.html]]; the initial canning process may be more energy-intensive than freezing and maintenance of a frozen state] --- The discussions above entail the consideration of functions and the variables involved with them. It would be nice to have some manner of perceiving and updating connections between these. [ProjectIdeas, [[Ideas]], [[FunctionalNotation]]] --- Looking into Logit and Probit statistical methods! ...And how they differ from linear regressions! https://stats.stackexchange.com/a/30909 talks about the meat of linear models, and there are other insightful responses to the question that prompted it ("Difference between logit and probit models"). Did you know that "Probit" means ''prob''ability un''it''? Does "Logit" then mean logical unit? Apparently not; rather, it deals with logarithms: "the logit function gives the log-odds, or the logarithm of the odds" ([[Wiki article|https://en.wikipedia.org/wiki/Logit]]). I'm looking into this for a few reasons. Firstly, because Ryan and I were recently talking about it. Secondly, I haven't been exposed to much non-linear regression thinking/methodology. I'm aware of the case in which one determines the probability that a dependent variable is `True` given various independent variables, which I think is [[Probit]], but I'm uncertain. And I'm quite certain that this would be a good/useful thing to know. Thirdly, I saw the post "[[Linear Regression in Python|https://realpython.com/linear-regression-in-python/]]" on Planet Python, which reminded me that I should delve into learning something that doesn't involve linear regressions. Oh, and on the DataFramed podcast, whose archives I've been listening to, the speakers have mentioned a couple of times that "logistic regression" ("Logit," right?) is often either sufficient for many applications of statistics/DataScience within business (versus MachineLearning) or is the more efficient/[[marginally|MarginalAnalysis]] insightful choice.
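To make the "probability that a dependent variable is `True` given various independent variables" case concrete for myself, here's a minimal sketch on fabricated data (the data-generating process and all numbers are assumptions for illustration). Both models handle a binary outcome; Logit uses the logistic function as the link and Probit uses the normal CDF:

```python
# Sketch with made-up data: model P(y = 1 | x) with both Logit and Probit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
# Assume a logistic data-generating process so the "true" coefficients are known.
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))
y = (rng.random(500) < p_true).astype(int)

X = sm.add_constant(x)  # intercept column plus the single regressor
logit_fit = sm.Logit(y, X).fit(disp=0)
probit_fit = sm.Probit(y, X).fit(disp=0)

print("Logit coefficients: ", logit_fit.params)
print("Probit coefficients:", probit_fit.params)
print("P(y=1 | x=1) per the Logit fit:", logit_fit.predict(np.array([[1.0, 1.0]])))
```

The coefficient scales differ (logit coefficients tend to run roughly 1.6 to 1.8 times the probit ones), but the fitted probabilities come out nearly identical.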
--- On the mention of "marginally" above, I think that I'll now make a page for MarginalAnalysis that I can link to when I mean to use the word "marginal" in the way that the term is used in economics. Ryan and I were discussing that it can be very confusing to people who are more familiar with the conventional usage of the term "marginal", which suggests that a given thing --- the marginal thing --- is insignificant or inconsequential. In economics, *marginal* refers to the addition of one unit of something to a whole. For example, the addition of the integer one (`1`) to a different integer one (`1`) leaves you with a group that includes two ones. [Funny, I had attempted to use this to discuss rates of change via adding one, but in attempting to structure this statement I'm wondering again about how numbers "work". I've here "formed" a set that has a "length" of two, as it includes two "objects" (can you tell that my time spent with [[Python]] has changed me?). But when I'm attempting to add the integers within the set (`1` and `1`) together, I appear to be doing something different than simply getting the length of the set. (Perhaps I'm mostly being confused and amused by the fact that, in this case, the length of the set happens to equal what would also be the sum of the integers contained within the set.) But the addition seems to involve some additional process being done over the set, or maybe conducting some "operation" or behavior that involves all of the objects/elements/items contained within the set in question.] --- Candidacy for refactoring within a Wiki system, or any system that involves a notion of canonical content. How to determine this? --- Data as asset. Data as liability. </div> </content> </entry> <entry> <title>6th-April-2019: Econometrics</title> <link href="6th-April-2019%253A%2520Econometrics.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-04-06T11:32:28Z</updated> <published>2019-04-06T11:29:31Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> * Need to revisit the notion of Unit Roots. How to explain it? ** What's its relation to (non-)stationarity? * Revisit my research project's final draft to extract more topics </div> </content> </entry> <entry> <title>5th-April-2019: PublicGoodFinance</title> <link href="5th-April-2019%253A%2520PublicGoodFinance.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-04-05T06:32:37Z</updated> <published>2019-04-05T06:15:33Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Liberapay uses units of a week, not a day. We can use this to our benefit. Week v. month: a week lets funds be spread more regularly, less dramatically. "PseudoContracts" for cases in which money has been granted to someone. "Funding horizon": a forward-looking schedule (time series) over which one's present levels of funding __will__ expire. (Note that this does not itself integrate a predictive element. Wherever a predictive element is provided, any visualization of the funding horizon should clearly distinguish between funding amounts that __will expire__ and those that are predicted to ''likely'' be renewed; even when current funding is predicted to be renewed, (1) current funding and (2) predicted funding should nonetheless be represented, visually and conceptually, as separate matters. The key concern here is making it explicit that there is uncertainty.) </div> </content> </entry> <entry> <title>22nd-March-2019: Pandoc and ePub</title> <link href="22nd-March-2019%253A%2520Pandoc%2520and%2520ePub.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-22T04:34:50Z</updated> <published>2019-03-22T04:24:58Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Pandoc seems to handle putting multiple Markdown files into an ePub file as if they were all one document: the table of contents of an ePub that I generated using Pandoc listed all instances of H1 headings as top-level items, rather than making them sub-items of a top-level item corresponding to each individual input file.
One file included multiple H1 headings, and the headings were all represented in the TOC. I used the [[guide here that shows how to generate an ePub file from multiple files|https://pandoc.org/epub.html]], and I learned that I can place the input files' names all on one line --- I don't need to break the file names onto multiple lines with a backslash to indicate that the same command continues (or at least I think that's what the backslashes are for). This is all in an effort to produce ePubs prior to pushing any changes of one of my books to Leanpub --- letting me stay on their Free tier. </div> </content> </entry> <entry> <title>5th-March-2019: Writing, Ebooks, and OPDS</title> <link href="5th-March-2019%253A%2520Writing%252C%2520Ebooks%252C%2520and%2520OPDS.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-03-05T19:22:09Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> If I wish to make an OPDS catalog for ebooks of my own creation, refer to https://specs.opds.io/opds-1.2.html#23-acquisition-feeds. That should be the most relevant document on the subject until OPDS 2.0 is launched, but [[that's still in draft form|https://drafts.opds.io/opds-2.0]] and who knows how well developers will actually transition from OPDS 1 to 2 --- I'm concerned that few will update [[the apps that support OPDS 1|https://wiki.mobileread.com/wiki/OPDS#eBook_Reading_Software_Supporting_OPDS]], delaying its rollout and ability to grow in use. </div> </content> </entry> <entry> <title>23rd-February-2019: Econometrics Books</title> <link href="23rd-February-2019%253A%2520Econometrics%2520Books.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-02-23T05:42:17Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> [[When Ryan let me know about the Mastering Econometrics videos from Marginal Revolution University|20th-February-2019: Economics]], he also told me that there are books based around them. Turns out that the books are either introductory (//[[Mastering ‘Metrics|http://masteringmetrics.com/]]//) or more advanced (//[[Mostly Harmless Econometrics|http://www.mostlyharmlesseconometrics.com]]//). The latter interests me more after reading the below testimonial from its website, as I have more of an interest in applied [[Microeconomics]]. > “MHE is a fantastic book that should be read cover-to-cover by any young applied micro economist. The book provides an excellent mix of statistical detail, econometric intuition and practical instruction. The topic coverage includes the bulk of econometric tools used in the vast majority of applied microeconomics. I wish there was an econometric textbook this well done when I was in graduate school.” > > — Bill Evans, University of Notre Dame </div> </content> </entry> <entry> <title>23rd-February-2019: SQL & Python tutorials</title> <link href="23rd-February-2019%253A%2520SQL%2520%2526%2520Python%2520tutorials.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-02-23T05:34:05Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I found two tutorials that seem very useful.
* [[SQL Tutorial|https://mode.com/sql-tutorial/]]: Seems to cover from Basic to Advanced skills in [[SQL]] while using the host's editor, but the skills that you'll learn are likely general enough that they're transferable to other editors. Recommended highly on HN. * [[Python Tutorial|https://mode.com/python-tutorial/]]: "Learn Python for business analysis using real-world data." Seems to set an analyst on a good path for using [[Python]] for data analysis. They link to additional resources for further growth. </div> </content> </entry> <entry> <title>20th-February-2019: Economics</title> <link href="20th-February-2019%253A%2520Economics.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-02-20T14:49:39Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Ryan shared with me that there's a video series on [[Econometrics]] from the people that make a bunch of other videos that are shown in econ courses: [[Mastering Econometrics|https://www.mruniversity.com/courses/mastering-econometrics]] from Marginal Revolution University, which I hadn't heard of until now ... but I recognized some of the videos from classes! :) </div> </content> </entry> <entry> <title>20th-February-2019</title> <link href="20th-February-2019.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-02-20T11:02:40Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I found a number of [[Python]]-oriented online courses and tutorials! Some of which are from the University of Helsinki. * [[Geo-Python|https://geo-python.github.io/2018/]] (seems to be released yearly, with updates): "teaches you the basic concepts of programming using the Python programming language in a format that is easy to learn and understand (no previous programming experience required)." * [[Automating GIS-processes|https://automating-gis-processes.github.io/2018/]] (seems to be released yearly, with updates): "course teaches you how to do different GIS-related tasks in Python programming language. Each lesson is a tutorial with specific topic(s) where the aim is to learn how to solve common GIS-related problems and tasks using Python tools. We are using only publicly available data which can be used and downloaded by anyone anywhere." * [[Introduction to Quantitative Geology|https://introqg.github.io/qg/]]: "This course introduces students to how to study a handful of geoscientific problems using a bit of geology, math, and Python programming. The course is aimed at advanced undergraduate students in geology or geophysics." * [[Python Testing and Continuous Integration|http://katyhuff.github.io/python-testing/]] While looking for more courses posted online like this, I found these resources from The Carpentries (discussed further down): * [[Plotting and Programming in Python|http://swcarpentry.github.io/python-novice-gapminder/]] Other lessons offered by The Carpentries are [[R-programming]]-oriented. General GIS course, too ... for some reason (a lead in to advanced lessons): * [[Introduction to Geospatial Concepts|https://datacarpentry.org/organization-geospatial/]] --- > The Carpentries teach foundational coding, and data science skills to researchers worldwide. 
[[The Carpentries|https://carpentries.org]] seems cool (look below for links to lessons), but their website needs work. Finding lessons for self-learning the material that they tackle isn't quite clearly discoverable --- it's hidden in the Teach nav item and from there you need to know if you want lessons regarding [[Data|http://datacarpentry.org/lessons/]], [[Software|https://software-carpentry.org/lessons/]], or [[Library|https://librarycarpentry.org/lessons/]] carpentries. It //seems// like they're not making self-learning an objective, preferring to instead funnel people into workshops (click Learn in navigation links, see items directing you to workshops). --- Fascinating paper linked to by Software Carpentry (//"Teaching basic lab skills for research computing"//), regarding researchers' use of Automation, (Data) Version Control, Documentation, Task Management: Matthew Gentzkow and Jesse Shapiro: "[[Code and Data for the Social Sciences: A Practitioner's Guide|https://people.stanford.edu/gentzkow/sites/default/files/codeanddata.pdf]].", 2014. --- I also found this lesson article that's about using [[Git]] in [[RStudio]], part of a general text about using Git: http://swcarpentry.github.io/git-novice/14-supplemental-rstudio/index.html Programming with R: http://swcarpentry.github.io/r-novice-inflammation/ addresses a lot //except// for visualization and plotting. The lesson of which [[this article|https://datacarpentry.org/r-socialsci/04-ggplot2/index.html]] is a part might scratch the DataVisualization itch, but it doesn't seem too exhaustive. A course on Databases and [[SQL]]: http://swcarpentry.github.io/sql-novice-survey/ * It even addresses how to access the database with [[Python|http://swcarpentry.github.io/sql-novice-survey/10-prog/index.html]] and [[R|http://swcarpentry.github.io/sql-novice-survey/11-prog-R/index.html]] Lastly, some [[Shell scripting]] tutorials: http://swcarpentry.github.io/shell-novice/ --- !! Economics [[Time Series Analysis for Business Forecasting|http://home.ubalt.edu/ntsbarsh/Business-stat/stat-data/Forecast.htm]] --- It seems like the book [[Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython|http://shop.oreilly.com/product/0636920023784.do]] from O'Reilly is well-regarded (I've seen it mentioned a lot) and its contents seem extensive. The above was mentioned in [[this Reading List for a Fundamentals of Data Science course|https://wiki.cs.astate.edu/index.php/CS5623_Fall_2018_Reading_List]]. --- Apparently we can make books from JupyterNotebooks! See [[this repo (jupyter-book)|https://github.com/jupyter/jupyter-book]] for more. For an example, see [[the data8 book (which is quite good)|https://www.inferentialthinking.com/chapters/intro.html]]. </div> </content> </entry> <entry> <title>15th-January-2019: Beeware</title> <link href="15th-January-2019%253A%2520Beeware.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-01-15T08:00:57Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I messed up my setup of Python and other packages. I get the following error, now, when I attempt to install `beeware`. Neither Beeware nor Toga are working. 
```bash $ pip install --upgrade --pre beeware Collecting beeware Using cached https://files.pythonhosted.org/packages/34/10/7e2afc95a9290e827b83b634b982dfd969f908f6dbbfa6ecd153532c7863/beeware-0.1.1-py2.py3-none-any.whl Collecting toga>=0.3.0.dev2 (from beeware) Using cached https://files.pythonhosted.org/packages/04/73/3ded37ce19d9657be425fffd1ee76d7cde26395baab17c01548db0230927/toga-0.3.0.dev11-py3-none-any.whl Requirement already satisfied, skipping upgrade: briefcase in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from beeware) (0.2.8) Collecting toga-gtk==0.3.0.dev11; sys_platform == "linux" (from toga>=0.3.0.dev2->beeware) Using cached https://files.pythonhosted.org/packages/58/03/fd1b50d3e694191119e828e1455309e265cb80a17ea3a1f39eb0b411dc26/toga_gtk-0.3.0.dev11-py3-none-any.whl Requirement already satisfied, skipping upgrade: cookiecutter>=1.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (1.6.0) Requirement already satisfied, skipping upgrade: voc>=0.1.1 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (0.1.6) Requirement already satisfied, skipping upgrade: pip>=18.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (18.1) Requirement already satisfied, skipping upgrade: requests<3.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (2.21.0) Requirement already satisfied, skipping upgrade: urllib3<1.24 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (1.23) Requirement already satisfied, skipping upgrade: boto3>=1.4.4 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (1.9.78) Requirement already satisfied, skipping upgrade: setuptools>=40.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from briefcase->beeware) (40.6.3) Collecting pygobject>=3.14.0 (from toga-gtk==0.3.0.dev11; sys_platform == "linux"->toga>=0.3.0.dev2->beeware) Using cached https://files.pythonhosted.org/packages/59/9c/57ec6ad0d57c5f621b4f3c2256a7087d27a81b8c5a92237ac2f3fe66406c/PyGObject-3.31.2.dev0.tar.gz Installing build dependencies ... 
done Requested pygobject>=3.14.0 from https://files.pythonhosted.org/packages/59/9c/57ec6ad0d57c5f621b4f3c2256a7087d27a81b8c5a92237ac2f3fe66406c/PyGObject-3.31.2.dev0.tar.gz#sha256=e6b2dbd2de84bdc2d4e867300f2f43ee36dcd6b7d2a7fec115404e321c1eb852 (from toga-gtk==0.3.0.dev11; sys_platform == "linux"->toga>=0.3.0.dev2->beeware), but installing version 3.31.2.dev0 Requirement already satisfied, skipping upgrade: gbulb>=0.5.3 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from toga-gtk==0.3.0.dev11; sys_platform == "linux"->toga>=0.3.0.dev2->beeware) (0.6.1) Requirement already satisfied, skipping upgrade: pycairo>=1.17.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from toga-gtk==0.3.0.dev11; sys_platform == "linux"->toga>=0.3.0.dev2->beeware) (1.18.0) Requirement already satisfied, skipping upgrade: toga-core==0.3.0.dev11 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from toga-gtk==0.3.0.dev11; sys_platform == "linux"->toga>=0.3.0.dev2->beeware) (0.3.0.dev11) Requirement already satisfied, skipping upgrade: jinja2>=2.7 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (2.10) Requirement already satisfied, skipping upgrade: future>=0.15.2 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (0.17.1) Requirement already satisfied, skipping upgrade: whichcraft>=0.4.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (0.5.2) Requirement already satisfied, skipping upgrade: poyo>=0.1.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (0.4.2) Requirement already satisfied, skipping upgrade: click>=5.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (7.0) Requirement already satisfied, skipping upgrade: binaryornot>=0.2.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (0.4.4) Requirement already satisfied, skipping upgrade: jinja2-time>=0.1.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from cookiecutter>=1.0->briefcase->beeware) (0.2.0) Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from requests<3.0->briefcase->beeware) (2018.11.29) Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from requests<3.0->briefcase->beeware) (3.0.4) Requirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from requests<3.0->briefcase->beeware) (2.8) Requirement already satisfied, skipping upgrade: s3transfer<0.2.0,>=0.1.10 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from boto3>=1.4.4->briefcase->beeware) (0.1.13) Requirement already satisfied, skipping upgrade: botocore<1.13.0,>=1.12.78 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from boto3>=1.4.4->briefcase->beeware) (1.12.78) Requirement already satisfied, skipping upgrade: jmespath<1.0.0,>=0.7.1 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from boto3>=1.4.4->briefcase->beeware) (0.9.3) Requirement already satisfied, skipping upgrade: travertino>=0.1.0 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from toga-core==0.3.0.dev11->toga-gtk==0.3.0.dev11; sys_platform == "linux"->toga>=0.3.0.dev2->beeware) (0.1.2) Requirement already satisfied, skipping upgrade: 
MarkupSafe>=0.23 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from jinja2>=2.7->cookiecutter>=1.0->briefcase->beeware) (1.1.0) Requirement already satisfied, skipping upgrade: arrow in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from jinja2-time>=0.1.0->cookiecutter>=1.0->briefcase->beeware) (0.13.0) Requirement already satisfied, skipping upgrade: docutils>=0.10 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from botocore<1.13.0,>=1.12.78->boto3>=1.4.4->briefcase->beeware) (0.14) Requirement already satisfied, skipping upgrade: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from botocore<1.13.0,>=1.12.78->boto3>=1.4.4->briefcase->beeware) (2.7.5) Requirement already satisfied, skipping upgrade: six>=1.5 in ./anaconda3/envs/BeeWare/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.13.0,>=1.12.78->boto3>=1.4.4->briefcase->beeware) (1.12.0) Building wheels for collected packages: pygobject Running setup.py bdist_wheel for pygobject ... error Complete output from command /home/brandon/anaconda3/envs/BeeWare/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-r59b_a2i/pygobject/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-gmg9b7wz --python-tag cp37: running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.7 creating build/lib.linux-x86_64-3.7/pygtkcompat copying pygtkcompat/generictreemodel.py -> build/lib.linux-x86_64-3.7/pygtkcompat copying pygtkcompat/pygtkcompat.py -> build/lib.linux-x86_64-3.7/pygtkcompat copying pygtkcompat/__init__.py -> build/lib.linux-x86_64-3.7/pygtkcompat creating build/lib.linux-x86_64-3.7/gi copying gi/_compat.py -> build/lib.linux-x86_64-3.7/gi copying gi/_gtktemplate.py -> build/lib.linux-x86_64-3.7/gi copying gi/module.py -> build/lib.linux-x86_64-3.7/gi copying gi/_error.py -> build/lib.linux-x86_64-3.7/gi copying gi/types.py -> build/lib.linux-x86_64-3.7/gi copying gi/_ossighelper.py -> build/lib.linux-x86_64-3.7/gi copying gi/pygtkcompat.py -> build/lib.linux-x86_64-3.7/gi copying gi/_option.py -> build/lib.linux-x86_64-3.7/gi copying gi/docstring.py -> build/lib.linux-x86_64-3.7/gi copying gi/importer.py -> build/lib.linux-x86_64-3.7/gi copying gi/_propertyhelper.py -> build/lib.linux-x86_64-3.7/gi copying gi/_constants.py -> build/lib.linux-x86_64-3.7/gi copying gi/_signalhelper.py -> build/lib.linux-x86_64-3.7/gi copying gi/__init__.py -> build/lib.linux-x86_64-3.7/gi creating build/lib.linux-x86_64-3.7/gi/repository copying gi/repository/__init__.py -> build/lib.linux-x86_64-3.7/gi/repository creating build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GLib.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/keysyms.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Gtk.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GIMarshallingTests.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Gio.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GdkPixbuf.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Gdk.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Pango.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/__init__.py -> build/lib.linux-x86_64-3.7/gi/overrides copying 
gi/overrides/GObject.py -> build/lib.linux-x86_64-3.7/gi/overrides running build_ext pycairo: new API Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1283, in <module> main() File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1278, in main zip_safe=False, File "/tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 188, in run self.run_command('build') File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1115, in run self._setup_extensions() File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1110, in _setup_extensions add_pycairo(gi_cairo_ext) File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1093, in add_pycairo ext.include_dirs += [get_pycairo_include_dir()] File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 915, in get_pycairo_include_dir include_dir = find_path(find_new_api()) File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 860, in find_new_api import cairo File "/tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/cairo/__init__.py", line 1, in <module> from ._cairo import * # noqa: F401,F403 ImportError: /tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/cairo/_cairo.cpython-37m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index ---------------------------------------- Failed building wheel for pygobject Running setup.py clean for pygobject Failed to build pygobject Installing collected packages: pygobject, toga-gtk, toga, beeware Running setup.py install for pygobject ... 
error Complete output from command /home/brandon/anaconda3/envs/BeeWare/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-r59b_a2i/pygobject/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-_fmf3r51/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.linux-x86_64-3.7 creating build/lib.linux-x86_64-3.7/pygtkcompat copying pygtkcompat/generictreemodel.py -> build/lib.linux-x86_64-3.7/pygtkcompat copying pygtkcompat/pygtkcompat.py -> build/lib.linux-x86_64-3.7/pygtkcompat copying pygtkcompat/__init__.py -> build/lib.linux-x86_64-3.7/pygtkcompat creating build/lib.linux-x86_64-3.7/gi copying gi/_compat.py -> build/lib.linux-x86_64-3.7/gi copying gi/_gtktemplate.py -> build/lib.linux-x86_64-3.7/gi copying gi/module.py -> build/lib.linux-x86_64-3.7/gi copying gi/_error.py -> build/lib.linux-x86_64-3.7/gi copying gi/types.py -> build/lib.linux-x86_64-3.7/gi copying gi/_ossighelper.py -> build/lib.linux-x86_64-3.7/gi copying gi/pygtkcompat.py -> build/lib.linux-x86_64-3.7/gi copying gi/_option.py -> build/lib.linux-x86_64-3.7/gi copying gi/docstring.py -> build/lib.linux-x86_64-3.7/gi copying gi/importer.py -> build/lib.linux-x86_64-3.7/gi copying gi/_propertyhelper.py -> build/lib.linux-x86_64-3.7/gi copying gi/_constants.py -> build/lib.linux-x86_64-3.7/gi copying gi/_signalhelper.py -> build/lib.linux-x86_64-3.7/gi copying gi/__init__.py -> build/lib.linux-x86_64-3.7/gi creating build/lib.linux-x86_64-3.7/gi/repository copying gi/repository/__init__.py -> build/lib.linux-x86_64-3.7/gi/repository creating build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GLib.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/keysyms.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Gtk.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GIMarshallingTests.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Gio.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GdkPixbuf.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Gdk.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/Pango.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/__init__.py -> build/lib.linux-x86_64-3.7/gi/overrides copying gi/overrides/GObject.py -> build/lib.linux-x86_64-3.7/gi/overrides running build_ext pycairo: new API Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1283, in <module> main() File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1278, in main zip_safe=False, File "/tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run return orig.install.run(self) File 
"/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/command/install.py", line 545, in run self.run_command('build') File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1115, in run self._setup_extensions() File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1110, in _setup_extensions add_pycairo(gi_cairo_ext) File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 1093, in add_pycairo ext.include_dirs += [get_pycairo_include_dir()] File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 915, in get_pycairo_include_dir include_dir = find_path(find_new_api()) File "/tmp/pip-install-r59b_a2i/pygobject/setup.py", line 860, in find_new_api import cairo File "/tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/cairo/__init__.py", line 1, in <module> from ._cairo import * # noqa: F401,F403 ImportError: /tmp/pip-build-env-jzi58_uk/lib/python3.7/site-packages/cairo/_cairo.cpython-37m-x86_64-linux-gnu.so: undefined symbol: cairo_tee_surface_index ---------------------------------------- Command "/home/brandon/anaconda3/envs/BeeWare/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-r59b_a2i/pygobject/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-_fmf3r51/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-r59b_a2i/pygobject/ ``` </div> </content> </entry> <entry> <title>9th-January-2019: Beeware</title> <link href="9th-January-2019%253A%2520Beeware.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-01-15T07:59:25Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> //Below are some notes that I took while attempting to follow Dan Yeaw's guide [[How to Rock Python Packaging with Poetry and Briefcase|http://dan.yeaw.me/posts/python-packaging-with-poetry-and-briefcase/]] to bundle up an app written with Beeware's Toga for use by general consumers (without requiring them to install a bunch of additional development stuff). 
I ran into issues and overcame some of them with the steps mentioned below.// `poetry init` did not work after using the install command for Poetry `$ curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python` I had to use `pip install poetry` However, I was at the library at the time, they might've been blocking a port required for that script --- Got error `cannot import name 'LegacyRepository'` when attempting to search for and install `briefcase` or `pytest` [I did this after sawing that it's added later on] Only worked after doing `pip install --upgrade --pre toga` and `pip install --upgrade --pre poetry` --- It was unclear to me which Toga package to choose when searching. Looking ahead in the guide, I guessed that it was the `toga` option, so I chose that: ``` Search for a package: toga Found 23 packages matching toga Enter package # to add, or the complete package name if it is not listed: [ 0] toga-iOS [ 1] toga-curses [ 2] toga-flask [ 3] toga-demo [ 4] toga-android [ 5] toga-django [ 6] toga-gtk [ 7] toga-qt [ 8] toga-dummy [ 9] toga-pyramid [10] toga-watchOS [11] toga-cocoa [12] toga-tvOS [13] toga-web [14] toga-win32 [15] toga-dotnet [16] toga-winrt [17] toga-winforms [18] toga-mfc [19] toga-uwp [20] toga-cassowary [21] toga [22] toga-core > 21 ``` --- The description that I provided included an apostrophe and apparently the build tool didn't handle this case, leading to an issue when I attempted to run the app for Linux: ``` File "setup.py", line 22 description='An app that shows what Beeware's Toga is capable of.', ``` The config read: `description='An app that shows what Beeware's Toga is capable of.',` --- Upon attempting to generate the Linux file, I got an error for having an old install of `pip` (error below). 
I fixed this by using `pip install --upgrade pip` ```bash Traceback (most recent call last): File "setup.py", line 79, in <module> 'toga-django==0.3.0.dev11', File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.6/distutils/core.py", line 134, in setup ok = dist.parse_command_line() File "/home/brandon/.cache/pypoetry/virtualenvs/bee-demo-py3.6/lib/python3.6/site-packages/setuptools/dist.py", line 347, in parse_command_line result = _Distribution.parse_command_line(self) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.6/distutils/dist.py", line 472, in parse_command_line args = self._parse_command_opts(parser, args) File "/home/brandon/.cache/pypoetry/virtualenvs/bee-demo-py3.6/lib/python3.6/site-packages/setuptools/dist.py", line 658, in _parse_command_opts nargs = _Distribution._parse_command_opts(self, parser, args) File "/home/brandon/anaconda3/envs/BeeWare/lib/python3.6/distutils/dist.py", line 528, in _parse_command_opts cmd_class = self.get_command_class(command) File "/home/brandon/.cache/pypoetry/virtualenvs/bee-demo-py3.6/lib/python3.6/site-packages/setuptools/dist.py", line 478, in get_command_class ep.require(installer=self.fetch_build_egg) File "/home/brandon/.cache/pypoetry/virtualenvs/bee-demo-py3.6/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2307, in require items = working_set.resolve(reqs, env, installer) File "/home/brandon/.cache/pypoetry/virtualenvs/bee-demo-py3.6/lib/python3.6/site-packages/pkg_resources/__init__.py", line 858, in resolve raise VersionConflict(dist, req).with_context(dependent_req) pkg_resources.VersionConflict: (pip 9.0.1 (/home/brandon/.cache/pypoetry/virtualenvs/bee-demo-py3.6/lib/python3.6/site-packages), Requirement.parse('pip>=18.0')) ``` --- Issue with outdated `setuptools` --- I was using Python 3.6, apparently, and I put `^3` in my Python config file. </div> </content> </entry> <entry> <title>2nd January 2019</title> <link href="2nd%2520January%25202019.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-01-02T10:29:03Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> That persons' behavior is, in part, a function of the information available to them, especially the quality of that information (actually/objectively, and perceived/subjectively). Accordingly, seasonality of demand or supply within economics may, in part, be determined by the limited availability of information that would let persons plan accordingly for future periods. Looking ahead at temperature patterns --- historical and recent, general trends ---, for instance, to determine whether they ought to purchase a coat now versus later, when other, less-informed persons are all rushing in to the market. [One must be cautious, here, to avoid being either a NaifOrSophisticate, as suggested in IntertemporalChoice literature.] 
</div> </content> </entry> <entry> <title>2nd January 2019: Beeware</title> <link href="2nd%2520January%25202019%253A%2520Beeware.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-01-02T09:44:52Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> As coding of interfaces is platform agnostic but manifests differently once implemented and run, this could be a major opportunity to see how user behavior differs across platforms (according to how interface elements/widgets, such as buttons, are variously displayed on a given platform). Could implement an open source system for conveying UX statistics to libre researchers, who would operate and analyze a statistical API for UX research, especially to the benefit of FLOSS creators and users. Users should be opted out by default, to respect user privacy. </div> </content> </entry> <entry> <title>2nd January 2019: OrcMode</title> <link href="2nd%2520January%25202019%253A%2520OrcMode.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2019-01-02T08:46:58Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Use RegularExpressions (RegEx?) to extract from an OrgMode file all lines that start with a note's formatting (as produced via use of Orgzly, my key manner of using OrgMode). This would leave out the comments ("note contents") attached to a note, but I am supposing that I will not need to worry about that for now (accordingly, I'm worrying only about extracting a note's title). [Accordingly, this version of the app would be too crude to be used for fully reading and overwriting an existing file --- allow yourself only to read a file via the file browser, but prompt the user for where/how to save the output (to prevent overwriting).] !! Getting notes' contents I could use RegEx to find all lines in the file that accord with a note's formatting and get the line numbers for each. Where there is a gap between line numbers, I might be able to infer that those lines are the contents of the first-listed note ("first-listed" as, when computing a gap, one considers [note A line number, note B line number] pairs and checks for a gap via `if (B - A) != 1`); see the sketch at the end of this note. !! Infer indentation level Via a count of the indentation syntax's usage !! Extract note statuses (states?) !! Searches in new windows Searches across all Notebooks (drawers?), or across some? (Orgzly searches across all if you access Search via the general search menu, but within a specific Notebook if you start the search in that Notebook --- a good assumption, but how to convey that to the user?)
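A rough sketch of the title-extraction and gap ideas above, assuming Orgzly/OrgMode headings of the form "stars, optional TODO/DONE keyword, title" --- the pattern and the example file name are assumptions, not something I've tested against my actual files:

```python
# Sketch: pull out OrgMode/Orgzly-style heading lines, then treat the line-number
# gaps between consecutive headings as each note's contents.
import re

HEADING = re.compile(r"^(\*+)\s+(?:(TODO|DONE)\s+)?(.*)$")  # stars, optional state, title

def extract_notes(path):
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()

    headings = []  # (line_number, indent_level, state, title)
    for num, line in enumerate(lines):
        match = HEADING.match(line)
        if match:
            stars, state, title = match.groups()
            headings.append((num, len(stars), state, title))

    sentinel = (len(lines), None, None, None)
    notes = []
    for (num, level, state, title), nxt in zip(headings, headings[1:] + [sentinel]):
        contents = lines[num + 1 : nxt[0]]  # the "gap" lines belong to this note
        notes.append({"title": title, "level": level, "state": state, "contents": contents})
    return notes

# e.g.: for note in extract_notes("inbox.org"): print("*" * note["level"], note["title"])
```

The sketch only reads a file; saving would still be prompted for separately, per the no-overwriting caveat above.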
</div> </content> </entry> <entry> <title>2nd-December-2018: Economics</title> <link href="2nd-December-2018%253A%2520Economics.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-12-02T07:37:20Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> The structure of economics phrases like "Price elasticity of demand" felt wrong to me when I attempted to connect the structure of the phrase to the structure of the equation (shown below) used to represent what it means. $$\frac{\Delta \text{Quantity Demanded}}{\Delta \text{Price}}$$ I suspect that this was confusing to me for two reasons, with the first being less important: (1) I did not yet understand how this equation works in [[Calculus]], and (2) I am a native user of the English language, which leads one to habitually think sequentially from the top left of a thing and proceed on to the bottom right of the thing. (2) is quite damning given that (1) would suggest that this is the incorrect way of thinking of it. Consider that this equation is to be thought of as a "rise over run" scenario, with the numerator (above the division bar) dealing with a value of Y and the denominator (below the division bar) dealing with a value of X. As the X value changes, the Y value might change, too. And in fact this equation and the phrase "''Price'' elasticity of ''demand''" are meant to suggest that we should consider how, //as ''price'' changes//, //quantity ''demand''ed changes//. </div> </content> </entry> <entry> <title>26th-October-2018: TiddlyWikiTweaks</title> <link href="26th-October-2018%253A%2520TiddlyWikiTweaks.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-10-26T23:27:20Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> At the top of $:/core/ui/Buttons/new-journal-here ``` \define journalButtonTags() [[$(currentTiddlerTag)$]] $(journalTags)$ \end ``` This definition seems to set the current tiddler as one of the tags of the New Journal tiddler (created upon pressing this button). This may be a step towards letting me make a non-tagged (but maybe linked!) Journal Here button, which would help to curb the number of tags produced while merely considering multiple matters. </div> </content> </entry> <entry> <title>19th-August-2018</title> <link href="19th-August-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-08-19T13:59:42Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Dang, man, TimeScarcity is hella stressful. Especially when someone comes at you and expects you to help them with X and to do so within the ("short") time frame that they have in mind, often already stressing about it as well. A key coping skill that I've developed in such cases is to mentally step back and consider whether the thing truly needs to be done within the time frame implied/provided/assumed. It has been helpful to recognize that I'm starting to get stressed out and realize that it's being caused by an (often dubious) assumption that X needs to be done within time A. </div> </content> </entry> <entry> <title>1st-August-2018: RedApples</title> <link href="1st-August-2018%253A%2520RedApples.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-08-01T04:00:54Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I wonder if there is a rhetorical or philosophical significance attached to the habit within the English language of introducing adjectives and such before introducing the object to which those adjectives pertain. Consider a "red apple."
For those of us that use the English language in some way, it is less common for us to think and speak firstly in terms of that there an object and secondly that object has the properties of ..., which, in this case, "is" the property of being/appearing the color red. Let me consider (1) the two experiences that brought this closer to my mind and (2) some of my analysis of the matter. </div> </content> </entry> <entry> <title>28th-July-2018: Economics Education</title> <link href="28th-July-2018%253A%2520Economics%2520Education.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-07-28T08:58:40Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I wish that my econ education delved more into working with quantitative data that is based upon (derived from) subjective appraisals (subjectively determined), such as with beer ratings on Untappd, which are numerical but pertain to subjective appraisals (of the beer, of the appraiser's experience with the beer). I suspect that I'll be able to get into this more during this semester's BehavioralEconomics class. !! Steps towards this I've been considering this for establishing (constructing) some baseline of comparison for subjective appraisals at any point in time, and controlling for expectations, which I suspect play a big role in appraisals. </div> </content> </entry> <entry> <title>23rd-July-2018: SkateboardingAndPhilosophy</title> <link href="23rd-July-2018%253A%2520SkateboardingAndPhilosophy.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-07-23T04:22:00Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> A goal of mine is to start skateboarding more and pushing my skills forward, including on transitions (pools, pipes), and I'd like to reduce the risk associated with that by getting padded up. !! Purchase research Knee, Elbow, Wrist pads: https://www.skatewarehouse.com/Triple_8_Saver_Series_Pad_Set_3_Pack/descpage-T8SVPS.html * Same: https://shop.ccs.com/triple-eight-saver-series-pad-kit-3-pack-box * Figure out size * These might not be big enough for skateboarding https://www.skatewarehouse.com/187_Adult_Elbow_and_Knee_Pad_Set/descpage-87AEKX.html Helmet: https://www.skatewarehouse.com/Bullet_Deluxe_Skateboard_Helmet/descpage-BUHLMB.html Natural deck: https://www.skatewarehouse.com/Skate_Warehouse_Blank_V-Natural_Deck/descpage-SWNDDK.html (if not go for a bulk eBay order) --- I like trying new tricks and approaches, and because these things are new, that's when the probability of hurting myself is significantly high. Accordingly, if you see me wearing a helmet and pads, it's because I'm pushing myself. --- A lot of extreme sport media is currently oriented around whether a particular activity is intensely difficult for the average athlete in that sport (if not just the average general person). And that isn't as interesting to me as people make it out to me. Bear in mind, here, that the average skateboarder might not even be able to kickflip, so, really, what are we accomplishing here? I look forward to seeing an extreme sports media that gives us the sense that each athlete is pushing themselves, learning, and having fun. This is not the average skateboarder, this is *the* skateboarder. 
--- Suppose that I am preferring to only practice one type of kickflip, really high, conservative, and low-speed ones, while there's many other types of kickflip that I could do: low ones, wobbly ones (think: shifties!), high-speed ones --- in general, perhaps it has been the case that rather than forming an understanding of kickflips in general, with a bunch of variation amidst the sequential sets that form each type of kickflip, I have been over-fitting my understanding to the one model that I prefer, thereby possibly leading me to attempt to apply an improper model (the one I prefer) to irrelevant conditions. </div> </content> </entry> <entry> <title>23rd-July-2018: Visualization</title> <link href="23rd-July-2018%253A%2520Visualization.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-07-23T02:54:59Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Text visualization, from a corpus: https://github.com/corajr/zotero-voyant-export </div> </content> </entry> <entry> <title>22nd-July-2018</title> <link href="22nd-July-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-07-22T03:25:53Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Of philosophical triangulation Steps towards the TheoryGraph: * Cognitive labor-saving devices/conventions, such as ** Symbolic chunking of points and textual expansion where/when necessary, allowing for quicker/effective (less confusing, risky given possible provision of ambiguity) parsing ** Some textual equivalent of a moving variable value table, as in Thonny, RStudio [this is possibly a subpoint of the above] * Argument graphing, akin to offering tables of contents that include enumerations of subpoints [possibly using RailroadDiagrams], but as stored in programmatically-parseable forms (text) and visual forms </div> </content> </entry> <entry> <title>14th-July-2018: RSSforStaticTiddlyWiki, Atom feed dev</title> <link href="14th-July-2018%253A%2520RSSforStaticTiddlyWiki%252C%2520Atom%2520feed%2520dev.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-07-14T21:00:25Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> At first I was using [[this article|https://www.codeguru.com/csharp/csharp/cs_network/internetweb/article.php/c12711/ASPNET-Tip-Creating-an-Atom-XML-Feed.htm]] as an example atom feed for producing the atom template, but it was too simple/limited in scope. A bigger [[spec for atom feeds|https://validator.w3.org/feed/docs/rfc4287.html#rfc.section.1]] is found here, and the code that I was working from before seems to have been copied from there. Below is the example atom feed given in the previous link. It contains lots of rich metadata, and I'm unsure how I should fill this in. Permalinks, for instance, have me a bit worried. I'm wondering whether I might be able to get assistance with filling in the gaps from the folks in [[the TiddlyWiki Google group|https://groups.google.com/forum/#!forum/tiddlywiki]]. 
```xml <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> <title type="text">dive into mark</title> <subtitle type="html"> A &lt;em&gt;lot&lt;/em&gt; of effort went into making this effortless </subtitle> <updated>2005-07-31T12:29:29Z</updated> <id>tag:example.org,2003:3</id> <link rel="alternate" type="text/html" hreflang="en" href="http://example.org/"/> <link rel="self" type="application/atom+xml" href="http://example.org/feed.atom"/> <rights>Copyright (c) 2003, Mark Pilgrim</rights> <generator uri="http://www.example.com/" version="1.0"> Example Toolkit </generator> <entry> <title>Atom draft-07 snapshot</title> <link rel="alternate" type="text/html" href="http://example.org/2005/04/02/atom"/> <link rel="enclosure" type="audio/mpeg" length="1337" href="http://example.org/audio/ph34r_my_podcast.mp3"/> <id>tag:example.org,2003:3.2397</id> <updated>2005-07-31T12:29:29Z</updated> <published>2003-12-13T08:29:29-04:00</published> <author> <name>Mark Pilgrim</name> <uri>http://example.org/</uri> <email>f8dy@example.com</email> </author> <contributor> <name>Sam Ruby</name> </contributor> <contributor> <name>Joe Gregorio</name> </contributor> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> <p><i>[Update: The Atom draft is finished.]</i></p> </div> </content> </entry> </feed> ``` </div> </content> </entry> <entry> <title>13th-July-2018: RSSforStaticTiddlyWiki</title> <link href="13th-July-2018%253A%2520RSSforStaticTiddlyWiki.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:44Z</updated> <published>2018-07-13T19:25:55Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Need to use the template tiddler [[$:/core/templates/static.template.html]] as a basis for creating a multi-tiddler output, but that tiddler seems to use templating in a way that I'm unfamiliar with. In particular, it pulls in stuff via: `{{$:/core/ui/PageTemplate||$:/core/templates/html-tiddler}}` Oh! I thought that the first part read as PostTemplate, which I further mistook for $:/core/ui/ViewTemplate. I suspect that I know what's afoot here, now; let's dive in! 
The file looks OK (maybe deceptively simple), but the lack of indentation among blocks prevents us from really keeping track of what's working and associated with what, so let's add some indentation:

```
\define containerClasses()
tc-page-container tc-page-view-$(themeTitle)$ tc-language-$(languageTitle)$
\end
<$importvariables filter="[[$:/core/ui/PageMacros]] [all[shadows+tiddlers]tag[$:/tags/Macro]!has[draft.of]]">
  <$set name="tv-config-toolbar-icons" value={{$:/config/Toolbar/Icons}}>
    <$set name="tv-config-toolbar-text" value={{$:/config/Toolbar/Text}}>
      <$set name="tv-config-toolbar-class" value={{$:/config/Toolbar/ButtonClass}}>
        <$set name="themeTitle" value={{$:/view}}>
          <$set name="currentTiddler" value={{$:/language}}>
            <$set name="languageTitle" value={{!!name}}>
              <$set name="currentTiddler" value="">
                <div class=<<containerClasses>>>
                  <$navigator story="$:/StoryList" history="$:/HistoryList" openLinkFromInsideRiver={{$:/config/Navigation/openLinkFromInsideRiver}} openLinkFromOutsideRiver={{$:/config/Navigation/openLinkFromOutsideRiver}} relinkOnRename={{$:/config/RelinkOnRename}}>
                    <$dropzone>
                      <$list filter="[all[shadows+tiddlers]tag[$:/tags/PageTemplate]!has[draft.of]]" variable="listItem">
                        <$transclude tiddler=<<listItem>>/>
                      </$list>
                    </$dropzone>
                  </$navigator>
                </div>
              </$set>
            </$set>
          </$set>
        </$set>
      </$set>
    </$set>
  </$set>
</$importvariables>
```

''Side note:'' I currently forget why, but as I learned previously when mucking around with TW, the second half of the code we first saw, `{{$:/core/ui/PageTemplate||$:/core/templates/html-tiddler}}`, is important, because without it, the content of $:/core/ui/PageTemplate (shown above) would be spit out to wherever it's transcluded, but without any HTML tags. This is not intuitive. I'm now cheating to understand what's afoot: I've looked up what the `<$set>` blocks are/do, via the TW website, and they're called "set widgets". They don't help me, I think; they just assign values to a given variable. It's looking like the `<$list ...` widget is what I need to look into. The fact that it ends with `variable="listItem"` and that `listItem` also appears in the below/contained transclude widget, and appears to function as the name of the tiddler being transcluded with the transclude widget, suggests that looking into the `variable`, uh, parameter(??? or is it an "argument"? or is a parameter the thing to which a thing, an argument (arg) is posed?) should shed light on what's going on. HAH! After looking into the list widget, I see that I was sorta way off in terms of guessing as to what's going on. `variable` is literally allowing you to ONLY assign a name of your choosing to something that exists regardless of whether you call upon it explicitly or not: a variable that contains a list's currently-considered list item. So why might this be assigning a name to it, unnecessarily? I don't know. In the docs it suggests that the name given to this variable is, by default, `currentTiddler`, which you'll notice in the code for PageTemplate appears a number of times. Further, the fact that `<$set name="currentTiddler" value="">` comes up before the list widget leads me to wonder why it is that someone needed to bother with assigning an empty value to `currentTiddler` prior to this list. Is it simply because it was set earlier, by `<$set name="currentTiddler" value={{$:/language}}>`? 
I'll play a fool and, rather than do anything of this sort, proceed with the following, notably ripping out the `variable="listItem"` stuff and replacing `listItem` with the default name of `currentTiddler`.

```
<$list filter="[all[shadows+tiddlers]tag[$:/tags/PageTemplate]!has[draft.of]]">
<$transclude tiddler=<<currentTiddler>>/>
</$list>
```

OK. After some 30 minutes of struggling, I seem to have figured out an issue I had when going down this route. When I run the block of code above in a new tiddler, the tiddler has a BUNCH of formatted and stylized posts (as though the View Template were being used upon each list item), etc. I did not expect this to be the case. Next, I tried changing `<<currentTiddler>>` to my stupid tiddler, named [[Hi]]. Suddenly, there's the tiddler, but without the View Template formatting. Plain as day. //WAT.// Come to find out, if we run the above code block but delete the part of the list widget's filter that reads `tag[$:/tags/PageTemplate]`, leaving behind only `[all[shadows+tiddlers]!has[draft.of]]`, it works as expected: all posts, outputted without the View Template formatting. This is quite strange behavior, and I look forward to exploring Filters more to understand why this is happening. But for now, we can proceed with the simple list and transclude widgets! !! Jul 14th Investigating the various tiddler templates: * static (View Template) * raw-static (View Template, as html) * plain-text-tiddler (Pre-Wikified text of tiddler, as plain text) Wait, I think I'm using this wrong. I think that the tiddler [[journals-feed]] need only be used as a template for plain text file output, at this point. After testing, I see that I'm correct. Using the command below, I was able to generate a plain text file containing a list of all Journal posts, given the `Journal` tag filter (`[tag[Journal]]`):

```bash
tiddlywiki publicwiki --render "journals-feed" journals-feed.txt text/plain "" exportFilter "[tag[Journal]]"
```

This is pretty much a straight copy of the command given in [[the render command's documentation|https://tiddlywiki.com/#RenderTiddlerCommand]]. If you do not specify the double quotes (`""`) before `exportFilter`, no static file is produced. I'm not sure why. To go beyond plain text formatting, we need to add in HTML or XML blocks. To do this, we need to put tick marks around the code blocks so that they are preserved while the content outside of the tick marks is generated via TiddlyWiki's WikiText stuff. Before we go further, let's clone the journals-feed tiddler and name the new tiddler "journals-feed-atom", as I hope to make a general atom feed for journals/etc. I've now modified the new tiddler in accordance with this [[article regarding making atom feeds|https://www.codeguru.com/csharp/csharp/cs_network/internetweb/article.php/c12711/ASPNET-Tip-Creating-an-Atom-XML-Feed.htm]]. Generating the atom XML file works, as expected, with the following command:

```bash
tiddlywiki publicwiki --render "journals-feed-atom" journals-feed-atom.xml text/plain "" exportFilter "[tag[Journal]]"
```

Whoops! I need to add a way of sorting this so that the newest entries are listed first! I'll add this to both templates. The [[docs for Filter Operators|https://tiddlywiki.com/#Filter%20Operators]] will be invaluable to me, here. Or not. 
I couldn't figure out which thing fit what I needed, so I jumped over to [[Tobias Beer's TiddlyWiki Filter Examples page|http://tobibeer.github.io/tw/filters/#Filter%20Examples]] and all but immediately found what I needed: to add `!sort[modified]` to the templates' list filters. I'll work in the tiddler [[14th-July-2018: RSSforStaticTiddlyWiki, Atom feed dev]] on finishing up the atom feed itself, as that'll be a big discussion. --- Let's get the raw form: ``` {{journals-feed}} {{journals-feed||$:/core/templates/plain-text-tiddler}} ``` --- ! Todo Figure out how to: Use filter to get only 1 post that matches filter, to set the Modified/Updated date of the feed to the date of the modification of the most recent post listed in the feed, as opposed to the date of modification of the tiddler from which the feed is being made (assuming that I wish to offer feeds based upon some one particular Tiddler [tag]; could expand in the future to include multiple tags). Figure out how to combine text output with WikiText stuff, like widgets --- I was unintentionally wise in choosing "the template tiddler [[$:/core/templates/static.template.html]] as a basis for creating a multi-tiddler output", as I meant to use alltiddlers instead, which is a supreme dead end and not even really what we need. </div> </content> </entry> <entry> <title>13th-July-2018: InTheInterestOfBuildingCrossLanguageConfidenceAndCompetence</title> <link href="13th-July-2018%253A%2520InTheInterestOfBuildingCrossLanguageConfidenceAndCompetence.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:44Z</updated> <published>2018-07-13T12:11:09Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> //I first wrote the following to Javier Alvarado, my friend.// Regarding your interest in helping people build their confidence [and competence] in using different programming languages, have you used a visual debugger before? I used one for the first time recently and man, it cleared up so many of my questions about how Python works. And Python is my first language. Consider the notion of presenting learners with side-by-side visual debuggers, one with the language they're comfortable with, and one that they're becoming comfortable with. I suspect that a learner can be trusted to provide and test equivalent code, if they are comfortable enough with one language, as I'm assuming here. </div> </content> </entry> <entry> <title>31st-May-2018: ProjectIdeas &lt; AccentedAudioVideoCategorization</title> <link href="31st-May-2018%253A%2520ProjectIdeas%2520%253C%2520AccentedAudioVideoCategorization.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-31T10:03:04Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Listening to material presented by persons with accents that differ from your own can be quite the mental workout, fun when undertaken at our leisure and tedious otherwise. Perhaps a registry of accents could be developed that lets people establish greater empathy for speakers whose accents differ significantly from their own, along with greater ability to comprehend what they mean. Moreover, this could help those of us in the FakeAccentSociety acquire our fake accents much quicker. 
--- This was brought up again to me while listening to [["The Opening Lines of Romeo and Juliet Recited in the Original Accent of Shakespeare’s Time"|https://laughingsquid.com/romeo-juliet-recited-in-original-shakespeare-accent/]] </div> </content> </entry> <entry> <title>31st-May-2018</title> <link href="31st-May-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-31T09:29:25Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Modeling personal finances in terms of marginals and averages, in accordance with [[my idea of Payroll Psychology|http://brandon.zeroqualms.net/money-quickly-gained-is-quickly-lost/]], which Prof. Kelly pinned down for me as a matter of wealth effects upon Demand ("wealth" in terms of perceived increases in one's income; perceived, but not actual, in monetary terms --- my thought is more that individuals' feelings regarding their income levels may tend to be far less grounded in their averaged-out levels of income than in the perceived growth rate of their bank accounts, at and around t* [when one is paid, and some time thereafter]) WisdomEconomics is less of a tweak on conventional economic theory than it is an exploration of where reality (people's actual behavior) diverges from theory (people's theoretically normative behavior) as a result of people thinking differently about the matter. (To the degree that //thinking// [explicitly or implicitly/subconsciously] //about doing X// precedes //doing X//.) How might people be thinking about this [X] but in a muddied or less straight-forward way? --- In averaging out //expected inflows and outflows// across the entire time period that's being considered, growth rates for such expected flows are zero. This could be very useful, psychologically/emotionally. </div> </content> </entry> <entry> <title>23rd-May-2018: MotivationalSignificance</title> <link href="23rd-May-2018%253A%2520MotivationalSignificance.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-23T13:19:53Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Suppose that I did a geometric average of the sequential significance and linear significance functions. That way, neither terribly dominates the other at all time points, preventing a situation in which only the former is motivationally significant up to some //t*// and only the latter is motivationally significant thereafter. (Threshold of motivational significance.) (This addresses concerns that at some point the former's curve intersects the latter's and for //t// thereafter lies lower than it.) Here's a graph of each of these functions, considered: Data used for this graph, as generated from the following equations:

| !t | !Linear | !Sequential | !Geom Avg |
| 1 | 0.1 | 0.5 | 0.2236067977 |
| 2 | 0.2 | 0.6666666667 | 0.3651483717 |
| 3 | 0.3 | 0.75 | 0.474341649 |
| 4 | 0.4 | 0.8 | 0.5656854249 |
| 5 | 0.5 | 0.8333333333 | 0.6454972244 |
| 6 | 0.6 | 0.8571428571 | 0.7171371656 |
| 7 | 0.7 | 0.875 | 0.7826237921 |
| 8 | 0.8 | 0.8888888889 | 0.8432740427 |
| 9 | 0.9 | 0.9 | 0.9 |
| 10 | 1 | 1 | 1 |

Linear: `t/10` Sequential: `t/(1+t)` Geom Avg: `SQRT( Linear * Sequential )` (a quick numeric check of these appears just below) --- BLAH I realized that I've framed the above in terms of totals, not marginals. 
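(Before moving on: a quick numeric sketch that just recomputes the table's three columns from the equations above, purely as a sanity check.)

```python
# Sanity check of the table above: recompute Linear = t/10,
# Sequential = t/(1+t), and Geom Avg = sqrt(Linear * Sequential).
from math import sqrt

for t in range(1, 11):
    linear = t / 10
    sequential = t / (1 + t)
    geom_avg = sqrt(linear * sequential)
    print(f"{t:>2} | {linear:.1f} | {sequential:.10f} | {geom_avg:.10f}")
```

The printed values should line up with the table row for row (modulo trailing zeros).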
This theory zooms in on marginal changes, yet I've presented them in the roundabout form of totals. </div> </content> </entry> <entry> <title>22nd-May-2018</title> <link href="22nd-May-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-22T18:54:44Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Reasonable expectation of non-requirement, non-disruption, un-interruption </div> </content> </entry> <entry> <title>22nd-May-2018: Beeware</title> <link href="22nd-May-2018%253A%2520Beeware.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-22T10:21:22Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> How do I install Toga? Get Started just tells us to install it, but does not refer us to somewhere that tells us *how*. https://toga.readthedocs.io/en/latest/how-to/get-started.html </div> </content> </entry> <entry> <title>22nd-May-2018: TiddlyWikiPluginForQuickGitCommits</title> <link href="22nd-May-2018%253A%2520TiddlyWikiPluginForQuickGitCommits.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-22T10:14:40Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> LMAO I just learned that [["tiddly" means "slightly drunk" in British slang|https://www.thefreedictionary.com/tiddly]]. The name of GitTiddly officially works on more levels than intended! \o/ </div> </content> </entry> <entry> <title>21st-May-2018: Beeware</title> <link href="21st-May-2018%253A%2520Beeware.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-21T19:22:07Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Test to see if Toga can access file contents from a given directory. (In preparation for when FILE (FOLDER) OPEN dialog support is provided.) Using the file read code from Sololearn, this [[answer on SO about getting a user's home directory|https://stackoverflow.com/a/4028943]], and then hardcoding file reading: IT WORKS! I can pull in the contents of a file in the current user's home directory. I plugged this code into [[an existing Toga interface|https://toga.readthedocs.io/en/latest/tutorial/tutorial-2.html]] to see if the proper output is generated in the terminal when one clicks on a button, and sure enough it works! :D Given that my code isn't likely to be used for projects besides my own for a while, I think that I can hardcode file locations relevant to me, until FILE dialogs are provided. This will let me flesh out GUI tools quickly, especially for my Git & TiddlyWiki thing :)) --- Beeware Users Group (BUG)! 
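Back to the file-reading test above: a rough sketch of the hardcoded approach. `notes.txt` is just a placeholder name; the point is only resolving the current user's home directory and reading a file from it.

```python
# Rough sketch of the hardcoded file-reading test described above.
# "notes.txt" is a placeholder; swap in whatever file is actually needed.
import os

TARGET_FILE = os.path.join(os.path.expanduser("~"), "notes.txt")

def read_target_file():
    """Read and return the contents of the hardcoded file."""
    with open(TARGET_FILE) as f:
        return f.read()

if __name__ == "__main__":
    # In the Toga test, this same call sits inside the button's callback,
    # with its output showing up in the terminal.
    print(read_target_file())
```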
</div> </content> </entry> <entry> <title>16th-May-2018</title> <link href="16th-May-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-16T03:52:10Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Subjective ease & efficiency of re-use of superset configurations versus their initial establishment That my anxiety seems to have been set aside once I trusted that if I feel that my superset has been correctly set, my subset need not be worried about (as opposed to feeling like I do need to worry, due to a creeping worry that I'm not focusing upon the right stuff) AmIFocusedYet, title for a collection of essays along these lines. Firstly dealing with the consideration of what it means to be focused, by way of presenting a view that splits the notion of focus between Focusing Among all Possible Things (dubbed the establishment of a "superset" for an individual) and Focusing Among a Superset (dubbed the establishment of a "subset"). Notion that ADHD, for instance, could involve changes in set composition/configuration on either of these levels, but the more critical level is that of superset. There's also the idea that there is an attention-allocating process in the mind and that the allocative process draws mental resources away from the things one wants to provide significant attention and effort towards, limiting the amount of such resources that can be given to it. After a certain degree of indiscriminacy in the provision of % shares of one's total attention, the allocative process ceases or becomes far less relevant and prominent as a channel (object?) unto which attention/resources are given --- OH! this might suggest that a degree of freedom and resource abundance may be introduced into the system if the allocative process reduces its own (somewhat fixed?) share requirements at the same time as other objects' shares of mental resources become both equal and (somewhat) constant. --- Regarding [[Mind-Set]] </div> </content> </entry> <entry> <title>13th-May-2018: Misdirectives</title> <link href="13th-May-2018%253A%2520Misdirectives.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-05-13T20:14:43Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> The notion of "Zero to One" movements, as suggested by Peter Thiel, may be deliberately misdirective: though the notion of moving from zero to one may hold with respect to the impact of a given action or innovation, the manner by which one gets to such a point is often not zero to one; rather, the seeds of the consequent are present some time before its fruition (thus the seeds lie within the range between zero and one). Thiel alludes to this manner of thinking through his book Zero to One, so it's interesting to think that he, who has displayed strategic thinking elsewhere, could be duping people (who do not read closely enough) into believing that a movement from zero to one is possible and key. 
This is connected to my notion of leveraging what I have to get what I want, which has its roots in what I have (as opposed to going from 0 to 1) </div> </content> </entry> <entry> <title>1st-April-2018: InteractiveChartsForIntuitionBuilding</title> <link href="1st-April-2018%253A%2520InteractiveChartsForIntuitionBuilding.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-04-01T00:45:59Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Graphically denote which phase of the graph each plot represents: ''original'' or ''new''. I was thinking that they could first be shown in the same color, but with the ''new'' plots drawn thicker (a larger `linewidth` value). Be sure to output the new line so that it overlaps the old line (is layered over it) First plot: unlabeled, smaller linewidth, alpha reduced Could use dotted lines to represent hypothetical movements --- This is a great resource for info about using Matplotlib: http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html#simple-plot !! Quiz system (stating, testing hypotheses; checking whether a variable changed after being presented with a graphical change) "Baseline plot" might be hardcoded or set in relative terms, relative to the previous run of the code --- the latter could get confusing. </div> </content> </entry> <entry> <title>30th-March-2018: Liberapay</title> <link href="30th-March-2018%253A%2520Liberapay.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-03-30T08:23:20Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Yesterday I learned about the notion of "clearinghouses," and I realized that [[Liberapay]] counts as a clearinghouse! Clearinghouses are places/entities that hold onto a pot of money that is likely to change hands, for the entities that are likely to pass around claims to that money; these make the matter of handling money more efficient when transactions tend to take place between the same entities, as only net exchanges need to take place instead of gross exchanges, which lets some transaction fees be avoided. This makes sense for mutual support within an industry and for bringing in funding from sources that interact with other industries and markets. </div> </content> </entry> <entry> <title>21st-March-2018: TiddlyWikiTweaks</title> <link href="21st-March-2018%253A%2520TiddlyWikiTweaks.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-03-21T15:29:16Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Funny, I pretty much figured out what I'd need to make an Atom feed for a TiddlyWiki, but one roadblock would be just updating the feed's date once a new thing is published (see [["Feed updation Date"|https://www.tutorialspoint.com/rss/feed.htm]]). There might be some way to have the parent template wait for the first sibling (a new entry) to provide the correct date, but that sounds difficult to make. ... Silly me! This would be a trivial issue to overcome: # shut down all TW servers # use Bash to modify the date to $currentDate on all of the atom feed pages (tiddlers) that need to be generated # run the generation script, starting back up the TW servers! 
It's that simple! All I'd need is some additional understanding of Bash programming! 😄 </div> </content> </entry> <entry> <title>19th-March-2018: Liberapay</title> <link href="19th-March-2018%253A%2520Liberapay.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-03-19T17:35:46Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I suspect that Liberapay needs (would benefit from) a way for people without money on the site (or even people with it) to follow accounts without either pledging money to them or donating. Being able to check in on creators' receiving amounts, or even to get notified of whether a creator is on the site at all, would be nice and would likely lead to increased deposits by patrons and increases in funding for creators. Not to mention that this would greatly improve the utility of the site as a source of satisfaction and interest. Seeing the funding rates of someone you're interested in can be cool! (Maybe even watching the funding rates of your sworn enemies, but let's not think about that.) This could also lead to the introduction of lists of persons that one's interested in supporting, including lists that one could share publicly. Say, for instance, that you like person A's stuff and person A likes a bunch of other people/projects that you don't know about: you'd get to learn of the latter persons and support them. --- Unrelated to this, exactly, is getting notified if communities tied to persons you support or are interested in are growing or shrinking, especially relative to the growth rates of the funding of the persons you follow. [[Liberapay Stats Analysis]] </div> </content> </entry> <entry> <title>13th-March-2018: BloggingAndWiki</title> <link href="13th-March-2018%253A%2520BloggingAndWiki.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:44Z</updated> <published>2018-03-13T14:38:01Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Part of my thoughts regarding [[BloggingAndWiki]], from <https://news.ycombinator.com/item?id=2589897>:

```
_delirium on May 26, 2011 | on: The problem with blogging

I remember being really confused over the chronological ordering for non-diaries when it first become common, and am still not that sure it's the right thing, though it does have benefits for technological convenience (blog software) and lower activation energy (just "write a blog post"). It used to be common to have websites, and then a website might have a "recent updates" or "new articles" page with a reverse-chronologically-ordered list of recent updates, often named new.html or updates.html or something. But it wasn't the main site; just something for frequent visitors to check. But sometime around 2000-2003 or so, people just started throwing up everything on the equivalent of the "new articles" page! Seemed sort of a strange way to organize anything that wasn't a livejournal. Also seems to indicate a bit of a reduction in long-term ambition: people used to see building a website as an incremental endeavor, where you were slowly building up an edifice, so there was a clear separation between the long-term goal for the site (a resource w/ information presented logically) and the order in which you happened to add each piece (the recent-updates page). 
Wikis still have that edifice-building angle (the "Recent Changes" page is clearly not the main page), so maybe it's just that they've taken over that role so completely that the only non-wiki things left are blogs and webapps, with no more "regular" websites?
```

I especially appreciate the line about "long-term ambition" with respect to website creation and, more generally, publishing upon the web: > people used to see building a website as an incremental endeavor, where you were slowly building up an edifice, so there was a clear separation between the long-term goal for the site (a resource w/ information presented logically) and the order in which you happened to add each piece (the recent-updates page). </div> </content> </entry> <entry> <title>10th-March-2018: DissonanceOfFocus</title> <link href="10th-March-2018%253A%2520DissonanceOfFocus.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:44Z</updated> <published>2018-03-10T09:08:57Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Difficulty with understanding the whole thing of the efficiency decision and its importance amidst wage rate changes due to a dissonance of focus when the wage rates increase. Efficiency decision's equation: (MPl/w = MPk/r), or marginal product of labor over the wage rate //ought to (according to theory)// equal the marginal product of capital (K) over the rental rate [of capital]. I suggest that I'm getting confused by a dissonance of focus because of the following: Wage rates increase, leading the ratio of MPl to w (MPl/w) to shrink [w UP, but the ratio as a whole DOWN, as MPl remains constant], and this changes the efficiency decision's equation, making it unequal, as the change in the wage rate (w) does not necessarily change the values on the right hand side of the equation. The equation thus becomes: (MPl/w < MPk/r) According to the logic of the efficiency decision (rule?), one should reinvest money currently used for labor into capital, as it's more efficient to use one's money there, since the ratio (MPk/r) is now GREATER THAN (MPl/w). Logically, this makes sense, but when I'm thinking quickly about these matters I get confused. I suspect that it is because firstly I am focusing upon the wage, and I am focusing upon it either too much or in the wrong way. "w is going up! Oh no!" I think first. "The equation is changed, and now it is unequal!" I think second. But thirdly, my mind is muddied by the relation between the change in the wage and the direction of the inequality; "wage is up, so shouldn't the inequality say that the side with the wage is GREATER THAN what it was before?" The matter is made worse, I suspect, when the resulting inequality is given to me. "Ah, there's the wage, and there's the inequality sign! LESS THAN! Wages have gone down, BUY BUY BUY" (A tiny numeric sketch of this wage-increase case appears a bit further below.) </div> </content> </entry> <entry> <title>19th-February-2018</title> <link href="19th-February-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-02-19T21:16:17Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Lay out your school materials in such a way that you may love them and love them more readily. 
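Returning to the efficiency decision above: a tiny numeric sketch with made-up numbers, just to watch which way the inequality points once the wage rises (the marginal products and the rental rate are held fixed purely for illustration).

```python
# Made-up numbers, only to watch the direction of MPl/w vs MPk/r
# before and after a wage increase (MPl, MPk, and r held fixed).
MP_LABOR = 10.0    # marginal product of labor
MP_CAPITAL = 8.0   # marginal product of capital
RENTAL_RATE = 2.0  # rental rate of capital (r)

for wage in (2.0, 4.0):  # wage before and after the increase
    left = MP_LABOR / wage
    right = MP_CAPITAL / RENTAL_RATE
    sign = "<" if left < right else (">" if left > right else "=")
    print(f"w = {wage}: MPl/w = {left:.2f} {sign} MPk/r = {right:.2f}")
```

With these numbers, the wage increase flips the relation from MPl/w GREATER THAN MPk/r to MPl/w LESS THAN MPk/r, which is exactly the "shift spending toward capital" signal described in that entry.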
</div> </content> </entry> <entry> <title>3th-February-2018</title> <link href="3th-February-2018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-02-04T14:21:33Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Scope of effect (related to effect size), number of people affected by one's efforts. Degree of effect (Total Effect) as measured by the simplified model of NumOfPeopleEffected and EffectAmountPerPerson; I suspect that there can be ways of marginally decreasing/increasing the latter while increasing the former to a much greater degree (Elasticity of Effect). #BizPhil #Ideas #philosophy Mind you that the Total Effect model treats EffectPerPerson as homogeneous across all people involved. Marketing can serve as a means of boosting both factors of Total Effect. Regarding the degree to which an idea is good or not, or will provide value that translates to revenue of the amount that you require (you need to eat, pay payroll). --- Fixation upon the notion of whether X provides value, or whether it could at a later time (delayed onset). Considered from the standpoint of WHEN it provides value, the magnitude of value is considered. But what is left unconsidered is the probability (likelihood) of you getting to that point. Many factors influence that probability, even in unintuitive ways. #Ideas #philosophy The notion that something "scatters your force" (Emerson), diluting the degree to which you have power, or influence, or effect. The Harry Potter principle of leadership being rightly entrusted to, if anyone, they who do not seek to have it. The notion of power has a negative connotation surrounding it --- "he is power-hungry", we say. But let's slow down, let's consider how we're using that term in those cases. There is power, one has it or hopes to have it, but I think we agree that those things alone are not what is distasteful about it; rather, we dislike that the power would enrich others and likely rob us of something. But what about when the power-full person is content? Maybe they can influence some matter to others' benefit. Tangent: Of transactions being permissible when each party gets something from it. To get, you must be able to provide. </div> </content> </entry> <entry> <title>26th-January-2018: Liberapay Stats Analysis</title> <link href="26th-January-2018%253A%2520Liberapay%2520Stats%2520Analysis.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-01-26T15:21:20Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I'm diving into the `paydays.json` file for Liberapay ([[live|https://liberapay.com/about/paydays.json]], [[archived|https://gitlab.com/snippets/1695684]]), and oh boy has this file cleared up a bunch of issues: # ''Delineated USD/EUR data!'' The most obvious improvement over my scraped data is that the currency data in this JSON file is broken down by currency. ## (Come to find out, the original data actually was a sum of both currencies, with USD converted to EUR! I had no idea that this was the case while looking at the Liberapay Stats graphs, but [[I was told so in Liberapay's Gitter chat by the head developer|https://gitter.im/liberapay/salon?at=5a699744c95f22546de17ecb]].) ## This should help with isolating which weeks were pre/post-USD inclusion. 
(I was unsure about this) # ''Date/Time-Series Data!'' No more need for me to wonder about which week data corresponds to which dates! ## This should help with future analyses. I'm now in it with RStudio, learning how to take it apart. Learned to use the R library `jsonlite`, and I'm following this guide here: https://cran.r-project.org/web/packages/jsonlite/vignettes/json-aaquickstart.html I'm not certain whether or not I should worry about making a CSV from this, as at least for myself, I'm learning how to work with it directly from within RStudio. However, I do think that a CSV format of it would remove some technical barriers to analysis (say, by Quantitative Econ students --- or even myself when a professor demands that I use EViews). Given this, I think that making a CSV from it would be a good form of community service + further exercise in [[R-lang]]. !! CSV Structuring But how should I structure it? Putting the same columns next to each other (but with diff currencies) is confusing, so I'll make one set of columns per one currency and then another set with the other currency. All will be with one record, as before. [[My previous Liberapay scraper|https://gitlab.com/snippets/1695276]] worked in terms of making lists for each column, then combining all columns into a central DataFrame. I think that I'll do the same thing for now because (1) it's familiar to me, which I'd think would make things faster for me, and (2) I don't intend to be doing this very often, so even if this process is inefficient (as I suspect that it is) it's fine. 
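For reference, a rough Python counterpart of that flatten-to-CSV idea (the real exploration is happening in RStudio with `jsonlite`; this sketch just assumes `paydays.json` parses to a list of per-week records and lets `json_normalize` spread any nested per-currency objects into dotted column names):

```python
# Rough sketch: fetch paydays.json and flatten it into a CSV.
# Assumes the file parses to a list of per-week records.
import json
import urllib.request

import pandas as pd

URL = "https://liberapay.com/about/paydays.json"

with urllib.request.urlopen(URL) as response:
    paydays = json.load(response)

flat = pd.json_normalize(paydays)  # nested objects become dotted columns
flat.to_csv("liberapay-paydays.csv", index=False)
print(flat.head())
```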
</div> </content> </entry> <entry> <title>21st-January-2018: TiddlyWikiTweaks</title> <link href="21st-January-2018%253A%2520TiddlyWikiTweaks.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-01-21T10:06:31Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Figure out how to add a title or alt text to transcluded image tiddlers, possibly via an alt or title field and field value ($:/core/modules/widgets/image.js should let us do this) This is useful for making the SVG title image accessible. Creating this feature is a bit too over my head, as it requires following the thread of `src` and deciphering what is and is not essential to my creating an optional feature. However, there is some precedent for optional features, as the [[TiddlyWiki docs show a couple of different, non-essential parameters|https://tiddlywiki.com/static/ImageWidget.html]]. Follow those threads, duplicate and remix them. </div> </content> </entry> <entry> <title>21st-January-2018: Liberapay</title> <link href="21st-January-2018%253A%2520Liberapay.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-01-21T09:59:09Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Addressing friction in signup, money deposit (as far as I'm aware) Lack of solid (but general?) documentation of how you should file your taxes if you received any money, or if givers can receive tax benefits (ooh!) * This is likely due to their previous exclusive use of Euro, which is used by a broad number of countries; though the use of it allows for a greater audience size, it also allows for larger diversity in tax-filing procedures, as each country likely requires a different process (subtly different or not) </div> </content> </entry> <entry> <title>21st-January-2018: R-programming and Liberapay</title> <link href="21st-January-2018%253A%2520R-programming%2520and%2520Liberapay.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-01-21T09:29:04Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> It would be very helpful if we could layer the Pre and Post-USD data's correlation plots, using opacity to see where there's overlap and difference. I know you can do layering with ggplot2, but I've been using corrplot for correlation plotting; I found [[ggcorr|https://briatte.github.io/ggcorr/]] while googling for "layer corrplot" and it seems like something much closer to what I want! </div> </content> </entry> <entry> <title>11th January 2018</title> <link href="11th%2520January%25202018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:44Z</updated> <published>2018-01-11T09:33:34Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> I first considered this when reading [["The 5 Switches of Manliness: Challenge", an article about the supposed importance of challenge for men|https://www.artofmanliness.com/2011/06/05/the-5-switches-of-manliness-challenge/]]. I'm not sold on the idea that challenge is only a core thing for men. 
</div> </content> </entry> <entry> <title>10th January 2018</title> <link href="10th%2520January%25202018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:44Z</updated> <published>2018-01-10T15:16:22Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> Realized last night that on a static site I'd need to disable TiddlyWiki's option to "link to tiddlers that do not exist yet" ($:/core/ui/ControlPanel/Settings/MissingLinks), if they're in CamelCase, because broken links lead a site to be penalized by search engines (so this is an [[SEO]] concern). Problem is, if I disable this outright, I'd miss out on a useful core feature for when editing the wiki contents. So I was stuck with an either/or decision. Or was I? Maybe I could temporarily disable that configuration when generating the static site! I already use a bash script to generate the site, so maybe I can just add this to the script pre-generation and re-enable it afterwards! (I never considered doing this manually, which I'm kinda too lazy to do, not to mention that this would be ripe for UserError). The config tiddler for that feature looks like this:

```
created: 20180110215313115
modified: 20180110230252715
title: $:/config/MissingLinks
type: text/vnd.tiddlywiki

yes
```

We just need to change a small part of it to make this work (changing `yes` to `no`, and back again). But maybe we can do it even easier! Maybe we can store the contents of the file in a variable. I figured out how to do this, [[here|https://stackoverflow.com/questions/2789319/file-content-into-unix-variable-with-newlines]]. Then we can push to that file whichever contents we want to add to it.

```bash
echo -e "created: 20180110215313115\nmodified: 20180110230252715\ntitle: $:/config/MissingLinks\ntype: text/vnd.tiddlywiki\n\nno" > 'publicwiki/tiddlers/$__config_MissingLinks.tid'
```

--- https://unix.stackexchange.com/questions/219268/how-to-add-new-lines-when-using-echo https://www.tecmint.com/echo-command-in-linux/ </div> </content> </entry> <entry> <title>9th January 2018</title> <link href="9th%2520January%25202018.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2018-01-09T12:29:42Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> With the title of this article, I started to approach the wiki like [[TVTropes|http://tvtropes.org/]] </div> </content> </entry> <entry> <title>30th December 2017</title> <link href="30th%2520December%25202017.html"/> <!-- <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id> --> <updated>2019-03-08T16:45:43Z</updated> <published>2017-12-30T08:30:41Z</published> <content type="xhtml" xml:lang="en" xml:base="http://diveintomark.org/"> <div xmlns="http://www.w3.org/1999/xhtml"> This was straightened out for me, here: https://www.investopedia.com/terms/m/marginal-revenue-product-mrp.asp Long story short, the statistic(?) to watch is MarginalRevenueProduct. Oh! No wonder this is phrased this way! It's just like with ``Marginal Product [of] Labor``, the "of" is implicit after "Marginal Revenue" and the rest is a clarification --- in full, it ought to read ``Marginal Revenue of Product``. [Delineate the concept here and pipe it back into the original entry] !! 
Application If considered with respect to BootyEconomics, the unit of input considered might be Booty and the output might be stimulation/shock/arousal of the viewer. Up to a certain point, the latter trends upward with each additional Booty employed (as this considers "each additional," we're talking about MarginalProduct). The latter would also be associated with revenue derived by the producer/provider, but due to DiminishingMarginalUtility, consumers demand less and less of the latter, the more and more that they have, providing less revenue to the provider (as the provider must decrease the price of the thing to sell any additional units). !! Today What prompted me to look into this is that I saw a video recording of a conference talk in which an "expert" online video maker was giving tips on making videos for marketing purposes. I confess that I did not watch the video, but it prompted a cynical analysis of online video that's been lingering in my mind for a while: Present-day online video tends to involve quite shocking/stimulating ways of presenting information and narratives. </div> </content> </entry> </feed>