A common problem retail traders face is testing whether their trading strategy is profitable (and if so, when).
Several platforms, such as QuantConnect (https://www.quantconnect.com/) and MetaTrader 4, allow traders to build expert advisors and test their strategies, but most traders don't have the programming skills required to make use of them, nor do they want to dedicate valuable trading time to developing those skills (learning to program is hard work!).
Two months ago, Matt decided to turn his Fiverr Gig (https://www.fiverr.com/mc2147/put-your-trading-strategy-into-code-and-backtest-it-for-you) into a productized service that would automate and test the strategies of retail traders against years of historical data. Seeing the passion that I had for finance, he invited me on board to form a team of three with Dilip Ojha, another programmer and close friend of Matt's from high school.
I decided to join the team as I see this business as a great way to not only learn more about finance but also explore the small yet growing community that combines finance and technology.
The service is simple: we take a trader's strategy and convert it into code. The most interesting part of the work is taking a very human and often complex decision and turning it into technical instructions that a machine can understand.
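To give a flavor of what that conversion looks like, here is a minimal sketch (written in Elm, the language used elsewhere on this site, not necessarily what we use for client work; the rule and the numbers are illustrative, not a real client strategy) of turning the human rule "buy when the short moving average crosses above the long one" into testable code:

-- Illustrative sketch only -- not a real client strategy.
-- Assumes prices are ordered most recent first and that at
-- least 50 prices are available.
movingAverage : Int -> List Float -> Float
movingAverage n prices =
  let window = List.take n prices
  in List.sum window / toFloat (List.length window)

shouldBuy : List Float -> Bool
shouldBuy prices =
  movingAverage 10 prices > movingAverage 50 prices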
Our site (and business) is still in its early stages, but we're working hard to release a more complete version by mid-June. Consider this our "landing page" :)
Feel free to have a look and message me if you're interested or have any suggestions for our site or business! We'd love to receive feedback on design, company structure, pricing, or anything else you have ideas about!
IDEA:
SpeedER is a web application that seeks to reduce the time between a patient's injury and their care. It allows anyone to create a profile containing their health history and personal information, so a hospital can quickly pull up and view patient profiles when patients arrive--or even while they're on the way. When patients are injured, they can report the injury with a few clicks on the platform and discover the optimal emergency room to go to by comparing the estimated travel times and estimated wait times at nearby emergency rooms. The overall goal of the platform is to ensure that injured patients receive treatment quickly by eliminating the time spent waiting in line and filling out forms.
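Put formally (this is my own formalization of the goal -- the app presents the numbers and the patient makes the final choice), the optimal emergency room is the one minimizing total time to care:

\mathrm{ER}^{*} = \arg\min_i \, (\mathrm{travel}_i + \mathrm{wait}_i)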
TECHNOLOGY:
* Node.js for backend
* Pug for frontend (https://pugjs.org)
* Various JavaScript libraries for testing (mocha, chai, sinon, etc.), user authentication (passport.js), encryption (bcrypt.js), and database queries (sequelize.js)
* MySQL (MySQL server + workbench for internal visualization)
* Google Maps API
* Python for automatically adding information to databases (like hospital geospatial coordinates)
FUNCTIONALITY:
Please note that SpeedER has three types of users: Patients, Medical Staff, and Doctors.
> PATIENT WALKTHROUGH:
After clicking the "Patient" button and subsequently the "Get Started" button on the homepage, you will be redirected to the patient signup page.
First, fill in the required fields. The sign-up form will notify you if you enter invalid information (for instance, "testing" instead of "testing@test.com" for the email field).
After you have successfully signed up, you can log into your account and view your patient profile.
A "Health Profile" contains information about a patient's age, weight, height, gender, insurance provider, etc. You can directly edit any of these fields and then click on the "Save" button below to update these changes. You will stay on the same page after clicking the button, but you can navigate to another page and come back to see that these changes are in fact persistent.
The "Issue an Injury Report" page takes your current location into account and finds the hospitals closest to you. We currently only support viewing hospitals in Chicago, so if you are testing this page anywhere else, hospitals will not show up for you on the map.
First, pick a hospital that is either the closest to you or has the lowest wait time (your preference!).
Next, fill out the injury report form, which just requires a brief description of your injury and a selection of how you are feeling on a scale from 1 to 10 (this is known as the patient discomfort level, which we will discuss later when hospital administrators assign patients to doctors).
Note that you can use the speech recognition feature (click the microphone icon and then start speaking) to document a description of your injury.
The "Past Injuries" view contains information about all past injury reports a patient has filed.
Please note that as soon as a patient issues an injury report using our map view, that report is documented and stored in our database. Thus, injury reports appear immediately in the "View Past Injuries" view.
> DOCTOR WALKTHROUGH:
Doctors' main task is simply to sign up for SpeedER so that they are added to our database and we can query for doctors belonging to particular hospitals. This information is needed when assigning a patient to a doctor.
After clicking the "Doctor" button and subsequently the "Get Started" button on the homepage, you will be redirected to the doctor signup page.
Please note that doctors and medical staff members belong to the same database table, so a doctor cannot sign up as a medical staff member, and vice versa. However, a doctor, as well as a hospital administrator, can sign up as a patient because the patient database table is separate.
In the first part of the sign-up process, the doctor just sends their information to the database so that any hospital administrator who belongs to the same hospital can verify the doctor's identity.
The doctor waits until a hospital administrator verifies them. Upon this verification, the doctor receives an email notification with a verification code.
Now the doctor navigates back to the website and inputs the verification code.
When a medical staff member assigns a doctor to a patient, the doctor receives an email with the patient's name and injury description.
> HOSPITAL ADMINISTRATOR/MEDICAL STAFF WALKTHROUGH:
After clicking the "Hospital Administrator" button and subsequently the "Get Started" button on the homepage, you will be redirected to the hospital administrator signup page.
As noted in the doctor walkthrough, doctors and medical staff members belong to the same database table, so one cannot sign up as the other; either can still sign up as a patient because the patient database table is separate.
After successfully signing up, a hospital administrator can access their profile, where they can perform a variety of tasks.
A hospital administrator can review doctors' names and emails and make sure these credentials align with those of real doctors at their hospital. If the credentials are valid, the administrator checks off the doctors' names and sends each of them an email with their respective verification code.
The "Assign Patient to Doctor" page allows a hospital administrator to match an incoming patient with a doctor at the specific hospital the hospital administrator works at.
The left column of this page lists patients by injury level. We take this information from the patient's injury report (the discomfort level) and sort patients from the highest level down to the lowest.
The middle column includes a box where the administrator can "drop" a patient and a doctor and then make an assignment.
The right column includes a list of unassigned doctors (basically doctors who are free to treat patients) who work at that hospital.
Click on a patient and drag them into the assignment box, then click on a doctor and drag them in as well. Finally, click "Make Assignment" to match the patient to the doctor. Upon this action, the doctor receives an email notification with the patient's name and injury description.
Please note that an assignment pairs exactly one patient with one doctor (a doctor treats one patient at a time).
A hospital administrator can unassign a currently assigned doctor once they know the doctor is done treating a patient. (How the administrator learns this is external to the application.)
This page not only unassigns doctors but also "discharges" patients that were assigned to that doctor. We update this information in our database.
IDEA:
I have been using the Pen Tool in Adobe Illustrator for the past few years for logo design, and I am still intrigued by it. I've always been curious about how the tool works behind the scenes.
Like most first-time users, I had a difficult time understanding how to use the Pen Tool. Friends familiar (and frustrated) with the tool always asked me, "How do you prevent the lines from overlapping one another?"
I wanted to create a basic, stripped-down version of the tool that people could use and test out before moving on to Illustrator's Pen Tool.
TECHNOLOGY:
> Elm:
http://elm-lang.org/
DOCUMENTATION (HOW TO USE APPLICATION):
Components:
> CANVAS -- where you can draw paths, which can be used to create vector icons, images, etc.
> PEN
>> Points -- these are essentially "Anchor points," which help you establish the flow of the path
>> Paths -- smooth representations of outlines of shapes, or straight or curved lines. The strength of paths lies in the fact that they are vectors and thus can be scaled up to any size without appearing pixelated.
>> Handles -- this implementation of the Pen Tool uses cubic Bézier curves, which have two control points (a bottom and a top). Handles represent these two control points, which are connected by a line. A path segment between two anchor points is typically drawn from the first point's bottom control point to the second point's top control point. If you simply click on the canvas and don't drag out any handles, then that point's control points are simply the point itself (see the formula sketched below).
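For reference, this is standard Bézier math rather than anything specific to this project: a cubic segment between anchor points P_0 and P_1, whose relevant bottom and top control points are C_1 and C_2 respectively, traces

B(t) = (1-t)^3 P_0 + 3(1-t)^2 t \, C_1 + 3(1-t) t^2 \, C_2 + t^3 P_1, \qquad 0 \le t \le 1

When C_1 = P_0 and C_2 = P_1 (a bare click with no handles), the segment degenerates to the straight line between the two anchors.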
Getting Started:
1. Click anywhere on the canvas to start a path.
2. Click and hold the mouse down to establish handles for a point (to give the point control points).
3. Move the mouse to see what the next subpath looks like.
4. Click on another point and hold down to establish new handles and see how the subpath begins to curve.
5. Release the mouse when you are satisfied with the shape of the current subpath.
6. Under the "Canvas" interface, click "End" when you want to end your path. The extending line is drawn automatically as the mouse moves, but it will disappear once the mouse leaves the canvas -- so do not worry about the extending line being drawn.
Features:
> End: End will end your current path at your last anchor point and allow you to begin drawing a new path (essentially on a "layer" above the current path).
> Undo: Undo will undo the last subpath drawn on the current working path. Once you have clicked "End," this path cannot be accessed again, and thus you cannot undo on this "finished" or "archived" path. Imagine that once you click "End," your path is placed on a layer that cannot be accessed again.
> Clear: Clear simply clears the canvas but keeps track of all your other preferences.
> Show All Handles: This feature (default Unchecked) allows you to see all the handles of your current working path for guidance.
> Show All Points: This feature (default Checked) allows you to see all the anchor points of the current working path, which represent the flow of your path. To see what the resulting path looks like so far, you can turn this feature off.
> Show Exercise: An image of a Pen Tool Exercise (from http://design.tutsplus.com/tutorials/illustrators-pen-tool-the-comprehensive-guide--vector-141) will appear on the canvas. See if you can recreate all the paths.
> Canvas Opacity Slider: Change the opacity of the canvas so you can better see your points, paths, etc.
> Canvas Red, Green, Blue Sliders: Change the color of the canvas.
> Similar Opacity and Color Sliders for Paths, Points, and Handles: all changes will happen immediately and will only affect your current working path, not your previous "finished" paths.
> Stroke Weight Slider for Paths: Make the path appear thinner or thicker.
> Point Size Slider for Points: Increase or decrease the size of the anchor points.
PACKAGES, REFERENCES, AND HELP:
I heavily used the "elm-html" and "elm-svg" libraries to implement all the visuals. I used the Svg package to draw the points, paths, and handles, and the Html package for event handling and for organizing components on the page.
Elm Packages:
> http://package.elm-lang.org/packages/evancz/elm-svg/2.0.1/Svg
> http://package.elm-lang.org/packages/evancz/elm-html/4.0.2/
Elm Help:
> http://elm-lang.org/guide/interop
> http://elm-lang.org/examples/checkboxes
Svg:
> https://www.w3.org/TR/SVG/shapes.html
> https://www.w3.org/TR/SVG/paths
Credits (for icons and exercise):
> http://simpleicon.com/pen_tool.html
> http://design.tutsplus.com/tutorials/illustrators-pen-tool-the-comprehensive-guide--vector-141
CHALLENGES:
Signalling:
The initial challenges I had with this project revolved around signalling. The program relies heavily on user input and appropriate event handling. Initially, I tried to work with Elm's primitive signals. I quickly realized that I couldn't handle multiple signals at the same time using primitive signalling functions like "merge," "mergeMany," or "sampleOn" because they often gave me an out-of-sync representation of the canvas and the points, especially when I was debugging with several "Html.text" components as "print" statements.
I looked at the TodoMVC example, which used Elm-Html, to see how signals were being handled. The example used mailboxes, so I decided to review mailboxes. Mailboxes solved all of my initial issues (a minimal sketch of the pattern appears after the references below).
References:
> https://github.com/evancz/elm-todomvc/blob/master/Todo.elm
> https://www.classes.cs.uchicago.edu/archive/2016/winter/22300-1/lectures/Buttons/index.html
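For anyone unfamiliar with mailboxes, here is a minimal sketch of the Elm 0.16 pattern (a toy counter with illustrative names -- not this project's actual code):

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)
import Signal exposing (Mailbox, mailbox, foldp)

type Action = NoOp | Increment

-- The mailbox collects Actions sent from the UI.
actions : Mailbox Action
actions = mailbox NoOp

update : Action -> Int -> Int
update action count =
  case action of
    NoOp -> count
    Increment -> count + 1

view : Signal.Address Action -> Int -> Html
view address count =
  div []
    [ button [ onClick address Increment ] [ text "+" ]
    , text (toString count)
    ]

-- foldp threads every incoming Action through update,
-- producing a signal of states that is mapped to Html.
main : Signal Html
main = Signal.map (view actions.address) (foldp update 0 actions.signal)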
Representation of State:
My state essentially represents the entire program. As I was programming, I realized that I was holding so much information in so many different variables. I don't think my representation of state is the most efficient or intelligent way to represent the Pen Tool.
I followed the TodoMVC's example and used a record to represent the state.
My final representation of state:
type alias State =
  { points : List Point                 -- anchor points of the current working path
  , paths : List Svg.Svg                -- rendered segments of the current working path
  , allPaths : List (List Svg.Svg)      -- previously "ended" paths (see the End feature)
  , controlPoints : List CtrlPoint      -- control points used to draw the smooth curves
  , mouseState : MouseState             -- Up or Down
  , mouseAction : MouseAction           -- Moving or Stopped
  , currentPoint : Maybe Point          -- these four track the current/last anchor and
  , lastPoint : Maybe Point             -- control points for drawing the extending
  , ctrlCurrentPoint : Maybe CtrlPoint  -- line, the handles, and the curve while the
  , ctrlLastPoint : Maybe CtrlPoint     -- mouse is down and dragging
  , displays : Display                  -- settings for the interface below the canvas
  }
The most important parts are the points, paths, and control points (which help draw the smooth curves). The "displays" field deals with all changes and events on the interface below the canvas, like changing the color of the canvas, the points, paths, or handles.
The "currentPoint," "lastPoint," "ctrlCurrentPoint," and "ctrlLastPoint" are used when drawing the extending line to the mouse, the handles, and the curve between the last point and current point when the mouse is down and the user is dragging the handles.
Mouse State and Clicks:
I initially represented the mouse state as a generic integer, and based on the different integer values, I would have certain events occur. However, with this representation, the program could not distinguish between previous clicks and current clicks (as stated in response to my question about the clicks on Piazza). Thus, oftentimes, multiple handles would be drawn on the same point.
The response on Piazza told me to break the mouse state into "Up" and "Down" and to have another type called MouseAction that was either "Moving" or "Stopped." Representing the mouse state this way not only helped me implement the proper functionality but also made it easier to draw handles: I didn't need multiple pattern-matching cases just to deal with events.
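In code, the two types are as simple as (my reconstruction from the description above):

type MouseState = Up | Down
type MouseAction = Moving | Stopped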
Ending Paths:
I initially wasn't sure how I would end paths. I was considering keyboard shortcuts similar to the actual Adobe Illustrator shortcuts for the Pen Tool. However, I realized that most of what I was implementing relied on mouse actions, and adding keyboard shortcuts would require handling separate keyboard actions. Thus, I created an "End" button to end the path. This implementation eliminated the need for me to deal with the first point of the path to close the path. This way of ending paths is a bit simplistic and not quite like Illustrator's, but implementing Illustrator's "close path" and "end path" (Command + Shift + a) would require additional components in the representation of the state.
LIMITATIONS:
> Cannot close a path completely: To end a path, the user has to press the "End" button on the Canvas interface. Since there is no automatic close-path feature, the user has to go back to the first point of the path and make sure the mouse is directly above that initial point. Oftentimes it is difficult to align the mouse exactly with the first point, so the resulting path isn't exactly "closed."
> User has to navigate away from the canvas to end paths. I can see this feature being a bit annoying and not very intuitive to the user, who may want to press a key or perhaps double click to end the path.
> No deletion/insertion of anchor points: this may be annoying to someone who wants a path where one side of an anchor point is curved and the other is straight. My implementation sidesteps this by having the user end the path where it is curved, start a new path at the end point, and keep the new path straight. Again, because paths cannot be closed completely and the point under the mouse is not highlighted, the resulting path may not be aligned correctly.
> No support for filled-in paths: Filled-in paths would require me to keep track of the initial point of the path, so that each time a new subpath is added, the path could use SVG's "closepath" (Z) command to close itself from the new point back to the initial point. Ideally, I would have liked to implement this feature if I had a bit more time.
> No Highlighting: When you mouse over a point on a path in Illustrator, the path and point are highlighted. I was not able to even think about highlighting when I was implementing this project.
> No Automatic "Rulers": When you press "Shift" in Illustrator while using the Pen Tool, the handles are essentially "locked" on an angle, so they don't rotate at all.
> No uploading an image file to place on the canvas "underneath" the points, paths, and handles: The image would serve as a reference, similar to placing a scanned drawing and using the Pen Tool on top of it to create fonts, logos, etc. I researched ways to upload files to an Elm application, and most forums suggested using Ports and JavaScript. Since I didn't have much JavaScript experience outside of front-end rendering in React Native, and not much time to finish the project, I decided against file uploads and instead included a "practice exercise" (credits below) that is commonly used to teach users how to use the Pen Tool and deal with handles.
> I also ran into a lot of issues when the window is resized while drawing a path. I didn't have time to figure out why that happens, so for now I ask the user to refresh the page every time they resize the window.
IDEA:
WeEat was an app created out of the registered student organization The International Leadership Council, Technology Division in late 2015.
The idea behind the app was to save UChicago students money on food deliveries from local and distant restaurants via bulk ordering. WeEat would partner with these restaurants and feature their menus on our app. Students would use the app and see which restaurant menus were available that day. They could then order from the available menus before a certain deadline. During and after ordering, the app would inform the students of where they could pick up their order on campus.
WeEat would send the students' orders to the appropriate restaurants. Restaurants would arrive at a specified location on campus to distribute orders and collect payments from the students.
TECHNOLOGY:
> React Native:
https://facebook.github.io/react-native/
We decided to use React Native to build a native mobile app. Since we aimed to build for both iOS and Android, React Native would let us easily reuse code between the two versions of the app.
> Microsoft Azure:
https://azure.microsoft.com
We used Microsoft Azure for our mobile backend.
MY ROLE:
I predominantly worked on designing our app. I devised a color scheme and brainstormed logo designs. Once I finished the initial designs, I created the login page and user information page and worked on storing information locally on the phone.
> Color Scheme:
While brainstorming color schemes for the design, I consulted my favorite color theory reference:
https://www.smashingmagazine.com/2010/01/color-theory-for-designers-part-1-the-meaning-of-color/
I knew most other food apps utilized red (GrubHub, Yelp, DoorDash, etc.), so I wanted to distinguish our app by using another base color. I thought blue would convey a sense of trust and reliability, as stated in the color theory article. I wanted to combine dark and light blue to respectively reflect professionalism and creativity.
However, this past summer (2016), I read "Design for Hackers: Reverse Engineering Beauty" by David Kadavy (if you're interested in design, please read this book!). He mentions that the color red invokes hunger, which is why most food-related apps use red. I aim to incorporate this insight into our design and potentially change the color theme.
> Logo Design:
Because the title of the app clearly indicates that the app is food-related, I thought the logo could be more flexible. I wanted the design to be fun since our audience consisted of other college students. At first, I thought I could show a sense of community by spelling out "We" with the "W" in "We" and the "E" in "Eat." Next, I considered various symbols such as birds, clouds, maps, and take-out boxes. I thought birds would represent a quick delivery, clouds would represent the technology behind the service, maps would represent how the app would connect people, and a take-out box would represent the ease of ordering. For the last set of logos, I didn't have a concrete idea in mind -- I wanted to play around with lettering and fonts.
I asked my teammates to vote on the 22 designs I created. Most of the team liked #15, the bird created out of the take-out box. One of the other members really liked #22, so I decided to mix #15 and #22 together.
I created the vector version of the logo using Adobe Illustrator.
FUTURE:
WeEat is on hiatus, but we are currently reformulating the structure of the app. We hope to resume work in January 2017.
As my teammates spoke to restaurants, they received feedback that restaurants would prefer the app to automatically handle payments. Secure payment processing is something we need to research and integrate.
It is also possible that we may convert the app to deal with student-run food operations and businesses.
IDEA:
As huge strategy board game fans, Daniel and I were both intrigued by the idea of changing the traditional, somewhat static board game experience. While several board games, such as "The Resistance: Avalon," feature beautiful art, the player still feels distant from the world within the game. These games require much imaginative thinking on the player's part, so we wanted to find a way to ease this thinking and make the player feel more connected to the game through a different set of visuals.
We wanted to create a 3D board with interesting visual effects and immerse the player in a smaller world within the context of reality. We thought augmented reality (AR) would be a better platform than purely virtual reality (VR), as AR would ground the player in a room with his or her friends and would still enable the player to see those friends' reactions. Logistically, it would be difficult in VR to recreate and expand upon the traditional experience of gazing down at a board at the angle one uses with a physical board game. Developing for VR would also eliminate the need for a physical board, which we still wanted as the object connecting the players.
Since this class (CMSC 23400: Mobile Computing) focused on developing for mobile devices, we also thought about the realm of mobile games. We realized that there weren't many multiplayer, synchronous mobile games. Most mobile games are single-player, or are multiplayer games with many single-player characteristics: playing on your own time, storing results, and then hearing that your friend beat your score several hours later. Ultimately, we wanted players to be physically connected to their friends while playing a mobile game. Basically, we wanted to explore this intersection of traditional board games and mobile app games.
DOCUMENTATION/HOW GAME WORKS:
Read the poster above for rules and gameplay.
CAN I TEST OUT THE GAME?
Because we were loaned Android cellphones (Google Nexus 5) for development and because neither Daniel nor I own an Android phone, we decided not to push to Google Play. We also ran into complications when pushing our project to our git repositories (as mentioned below). If you'd like to test out the game, please email me, and I can send you the Unity project.
TECHNOLOGY:
> Unity Game Engine:
https://unity3d.com/
> Vuforia Augmented Reality Extension:
https://www.vuforia.com/
> Photon Unity 3D Networking:
https://www.photonengine.com/en/PUN
The project was built with the help of the Vuforia AR extension for Unity and the Photon Unity Networking library. We first worked on projecting an image on top of our representation of a board, for which we chose a QR code to ensure reliable recognition by the camera. We tried other images, like the Avalon logo, but we believe they had too much white space for the camera to register them as the image target under certain lighting and angles.
Next we developed the game board with player prefabs and static buttons. Networking followed soon after, and with networking came most of our issues, ultimately leading us to Photon Unity Networking, which had great multiplayer support. Most of the execution focused on determining how to represent the game state and dynamically updating it through Unity scripts (as opposed to using multiple QR codes), since scripts made more logical sense given what we had built so far. Networking and building the game state went hand in hand, along with debugging. Lastly, we integrated our desktop version of the application with Google Cardboard, which also required a bit of debugging and additional programming.
When we finished the project, we realized how central the Vuforia AR extension for Unity and Photon Unity Networking were to our application. Without these external libraries, our application would have been much harder to develop. We found out about Vuforia early on, through several YouTube tutorials and game developer blogs. However, we struggled with networking and stuck with Unity’s Networking API for several weeks into the project. When we heard about and researched Photon, we realized we would have to read through most of the API and discard our previous networking code. However, deciding on this change and giving Photon a chance really helped us develop a fully-fledged multiplayer game.
We also realized that since AR and VR are relatively recent developments, there weren’t many resources regarding the integration of Vuforia and Google Cardboard. We learned that development in relatively new fields consists of lots of trial and error and having time to test out different representations of game state and components. Approximately 1/3 of our development time was dedicated to debugging.
MY ROLE:
I designed the game board, which consisted of Unity models and game objects placed and organized on top of the chosen image target. I also integrated the original desktop version of the application with Google Cardboard, binding Cardboard's two cameras to Vuforia's AR camera and making sure the clipping planes were aligned across all cameras.
Daniel predominantly worked on networking, first setting up a network manager using Unity’s Networking API and later setting up and integrating Photon networking into the project. He also created, added, and networked important player effects, such as the alignment of player positions on the board, the blue token that appears underneath the quest leader and the glowing blue halo that surrounds players when they have been selected to go on a quest.
We both familiarized ourselves with Vuforia, registering as developers and testing the strength of the AR extension with different QR codes and images. Once the board and the initial networking were set up (once players could join and appear on the game board), both of us worked on the game state and logic and made sure all changes were properly networked.
WHAT WE'D DO DIFFERENTLY:
> Learning about Photon earlier:
We were three weeks into development using Unity's native networking API when we heard about Photon from one of my friends at UChicago (Gamal DeWeever) who develops games using Unity. We decided to give it a try since Transforms using Unity's native networking were not showing up on UChicago's network but were showing up when Daniel connected to his home network. Since the multiplayer aspect was a huge component of our game, we weren't able to work on much else until networking was completely finished and working properly. If we had heard about Photon earlier, we would have finished the game earlier and could have developed more environments, and possibly a waiting room for players joining the game.
> Working on both Cardboard and Desktop from the start:
We thought finishing development for the desktop version and then integrating with Cardboard would be relatively straightforward. One of the main challenges we faced was in fact this integration. The desktop version seemed to properly respond to Mouse events. However, once we hooked up Cardboard and changed the events to Input events as opposed to Mouse events, we ran into several issues, especially when listening for Pointer Click events.
Pointer Click and Pointer Down would not work, and these two events were essential to our game. As a consequence, we had to develop our own event handling system to define Pointer Clicks using Pointer Enter and Pointer Exit. It is difficult to say, but maybe if we had built everything with Cardboard initially, we may not have run into this issue.
> Finding a source control system specific to Unity:
As we started to share our code snippets and the Unity project, we soon realized that using Git would not be the most efficient way to share files, as transfers took a while. Instead we ended up compressing/zipping our files and sharing them through Google Drive, which also took quite some time and left us with many versions of our application across our computers.
> Have A Stronger Understanding of Networks + Have Both People Set up Photon:
Unfortunately, at the time of this project I hadn't taken a networks class (I'm planning on taking it this school year -- 2016-2017). I wish I had a better understanding of networking, and I wish I had set up Photon myself in order to better understand how it works. Once Photon was integrated into our project, it became easier to abstract away networking and focus on the game logic while programming the networking functions. The networking functions were mostly wrappers around functions that updated the state.
After we finished the demo, we also thought about how we could have changed the demo so that we could show that players could play while in different rooms. While this feature was not what we aimed for, it was a positive side effect of the technologies used.
UChicago is pretty big on functional programming -- my first introductory computer science class was taught in Racket. I have taken a few other functional programming classes -- CMSC 22300: Functional Programming in Elm and CMSC 22100: Programming Languages (mostly in SML).
While taking CMSC 22300: Functional Programming in Elm, I realized that I enjoyed functional programming, especially the combination of functional programming and graphics. As you can see in "Elm Project 1: Pi" and "Elm Project 2: Complete Trees," Elm offers much graphics support. While thinking about my final project for the "Functional Programming in Elm" class, I approached my professor and asked him about the feasibility of implementing the Pen Tool in Elm. He told me Elm definitely had a series of libraries (Elm-Svg and Elm-Html) that could help me create the tool. He also told me to check out his personal project, Sketch-n-Sketch, a programmatic and direct-manipulation SVG editor.
I checked it out and found the idea very interesting, since programming graphics could allow for more precision in terms of positioning objects and symmetry. It could also combine different visual elements, like position and color -- for example, if an object is vertically placed higher than other objects, then decrease its saturation.
I started working on Sketch-n-Sketch while taking a data visualization class. The concepts I learned in the class inspired me to create a data visualization library for Sketch-n-Sketch. I have noticed that visualizations are often so complex (too many colors, too many words, etc.) that it is difficult to extract meaning from them. Microsoft Excel provides a wide array of charts and editing options, but its charts are not always flexible about changing the distance between bars or lines or about using a personalized color scheme. Graphics editors such as Photoshop and Illustrator allow users to create customized infographics, but the process takes much time and effort.
I sought a way to quickly generate customizable charts for users. Users would call a function in the code window of the editor and pass in parameters such as a list of data; a list of colors; a list of strings for the chart's title; x and y axes' titles; and more, depending on the type of chart. The function call would then produce a complete chart, visible in the graphics window of the editor.
Users could then use my initial template and customize it by adding to the code or the graphic produced. Professor Chugh also mentioned that users could even use the example charts as inspiration to build their own infographics or charts.
So far, I have produced vertical bar charts (regular, clustered, and stacked), pie charts (regular and donut), and pictograms. I am currently a bit behind on the project, as I still need to add more features to the charts and finish writing up blog posts. These blog posts would explain why I chose to implement a chart a certain way, briefly walk through the code and the chart's features, and explain how to call the function by passing in certain parameters.
Ultimately, I hope Sketch-n-Sketch will help artists learn how to program and will help programmers build intricate graphics and animations.
TECHNOLOGY:
> Sketch-n-Sketch:
http://ravichugh.github.io/sketch-n-sketch/releases/
Go check it out, and feel free to produce some beautiful logos! :)
> Elm:
http://elm-lang.org/
IDEA:
I have wanted to build a personal site for quite some time. Since I mostly use social media to stay in touch with friends, I thought that if people wanted to learn more about me (both professionally and non-professionally), they could check out my personal site.
The site would illustrate how art and technology have beneficially impacted my life. I have seen some amazing websites, such as
http://www.rleonardi.com/
I knew I definitely was not at that level of web development and design, but I wanted to challenge myself by creating my own theme and template. I wanted my design to reflect how I view technology. The first theme I thought of was magic because magic appears fluid to spectators, but to magicians, it’s a series of instructions that come together to produce a certain effect. I initially planned on creating an animated visual effect of the basic "Four Aces Trick" whenever someone navigated to the homepage.
However, the more I thought about implementing this theme, the more complex it seemed: I had considered drawing my own rendition of the cards and deck, and with the slight amount of JavaScript I knew, I wasn't sure I could animate the trick. I scrapped the idea and brainstormed some more. I eventually landed on the idea of fall/autumn. Fall is my favorite season because even though the weather cools down, the scenery outside is a palette of warm colors. Fall symbolizes a time of change, much like technology does.
Fall was a simple enough theme, and I ran with the idea. For my homepage, I played a bit off of the idea of young lovers who inscribe their names in tree bark. The rest of the pages follow this theme but focus more on the content; depending on the type of content, I would make the design larger or more complex. For instance, since there isn't much dynamic (in terms of adding more information on a frequent basis) content on the "About" page, I knew I could add larger and more involved designs. On pages with lots of content, especially colorful content, like the "Art" Page, I played down the overall design so that the design wouldn't clash with and overwhelm the content.
TECHNOLOGY:
> Django Web Framework:
https://www.djangoproject.com/
I used the Python web framework Django (version 1.8.13) to build this site; I settled on Django because I had taught myself its basics the year before. While Django follows the traditional MVC (model-view-controller) pattern, it calls its version "MTV," for model-template-view. Models are classes that represent items such as blog posts, gallery paintings, etc. Templates are HTML pages that serve both your hard-coded content and your dynamic models to the web. Views serve as mediators, connecting models to templates.
> Heroku: Cloud Application Platform
https://www.heroku.com/
I deployed my site to Heroku for a number of reasons:
1. Support for Django
Lots of popular personal-website-hosting platforms like GitHub Pages only serve static sites. Django generates pages dynamically, and thus is not supported on these platforms.
2. Popularity and Breadth
I imagine I will work with other frameworks like Rails in the future. I wanted to familiarize myself with a platform that supported multiple languages and frameworks and thus ruled out using PythonAnywhere.
3. Ease of deployment and cost
After reading that free Heroku applications typically will "sleep" (https://devcenter.heroku.com/articles/free-dyno-hours) after 30 minutes of inactivity, I was a bit concerned about deploying to Heroku. I was concerned that the time it would take for my website to load would dissuade people from viewing it.
However, when I read about other options, like Amazon's Elastic Beanstalk, I was driven away by the difficulty of deployment, as this was my first time deploying a web app. (While I had worked on other websites before, I was never the person who deployed them.) I was also driven away by the eventual pricing -- a good thing, too, because I later used Amazon's S3 for storage, and had I used Elastic Beanstalk as well, I'd have been billed for both based on usage. For references, see
>> http://stackoverflow.com/questions/26540417/heroku-vs-elastic-beanstalk-with-django-postgres
>> http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html
>> https://aws.amazon.com/elasticbeanstalk/pricing/
> Amazon Web Services (AWS) Simple Storage Service (S3):
https://aws.amazon.com/s3/
In Django, images, CSS, and other files are typically served through what is known as a "static" (sub)directory. Images and other files that have been uploaded through a form or through Django admin are served through a "media" (sub)directory.
After deploying to Heroku, I realized that neither my static nor my media files were showing up on my website. After some googling, I learned that Heroku does not store static or media files:
http://djangotricks.blogspot.com/2016/05/deploying-django-website-on-heroku.html
I quickly learned that I needed to use an external storage service. Most reference sites, like the one above, pointed to Amazon's S3 (Simple Storage Service).
Now, all my media and static files are stored in an S3 bucket.
PROCESS:
1. Think of all the pages and content
2. Select a theme
3. Create mock-ups of the pages, making sure to incorporate the selected theme
4. Start coding and run site on development server/local machine
5. Deploy website when basic functionalities are coded up
6. Make sure to provide basic cross-browser support
7. Continue making all sorts of changes to pages
DESIGN:
> Maple Leaf Image:
http://6iee.com/776749.html
I ended up vectorizing the leaf image in Illustrator so that the leaf wouldn't lose its quality upon window resizing.
> Background Stock Photo:
http://www.wildtextures.com/free-textures/dry-old-wood-texture/
I chose a large background for better quality and repurposed the image by adding color and a gradient.
> Fonts:
>> Advent Pro for page titles:
https://fonts.google.com/specimen/Advent+Pro
I didn't like Advent Pro's "a," so I used Raleway's "a" in its place. Pedantic? Yeah, I know.
>> Raleway for page titles:
https://fonts.google.com/specimen/Raleway
>> Helvetica Neue (inherited from Bootstrap CSS) for post (Art, Blog, and Links) titles
>> Gill Sans for content
> Favicon:
I created the favicon from scratch using Photoshop. I initially planned on including a maple leaf but decided against it once I zoomed out and viewed my original design. The leaf was difficult to discern and detracted attention from the other elements.
>> Here is the site I used to generate different sizes of the favicon for cross-browser support:
http://faviconit.com/en
One thing I have learned from designing is to never quit until you are satisfied. I have rarely used an initial design as the final product; I usually end up tweaking elements along the way. Since design is often subjective, it is a good idea to ask other people to critique your work.
The first photo below shows the initial design.
The second photo shows the initial color scheme, while the third photo shows the final color scheme.
The fourth photo shows the favicon.
The idea behind this assignment was to experiment with event-driven Functional Reactive Programming (FRP) in Elm, which deals with handling events like the passage of time, mouse clicks, and key presses.
The assignment called for us to estimate the value of pi using the Monte Carlo method. The idea is that you draw a square and then a circle within the square. Then, as time elapses (you choose how frequently), you randomly generate points within the square. You count the number of points within the circle ("Hits") and the number of points outside the circle ("Misses").
You estimate the value of pi by dividing the number of Hits by the total points generated. Then you multiply this number by 4.
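The reason this works (a one-line derivation; the site below gives the full explanation): the points are uniform over the square, so the fraction landing inside the inscribed circle approximates the ratio of the two areas. For a circle of radius r inside a square of side 2r,

\frac{\text{Hits}}{\text{Total}} \approx \frac{\pi r^2}{(2r)^2} = \frac{\pi}{4}, \qquad \text{so} \qquad \pi \approx 4 \cdot \frac{\text{Hits}}{\text{Total}}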
So, in the example shown in the photo, pi is calculated by dividing 1557 (Hits) by 1994 (Number of Points Generated) and then multiplying that result by 4 to obtain 3.12337...
If you want a full explanation and derivation of this formula, check out the site below:
> http://polymer.bu.edu/java/java/montepi/MontePi.html
I used the "Random" library to generate the points. To draw the square and circle, I used the "Graphics.Collage" library, which is outdated in the most recent version of Elm (it no longer exists in the "Core" package -- version 4.0.5 at the time of writing this description).
I used core version 3.0.0:
> http://package.elm-lang.org/packages/elm-lang/core/3.0.0
To get the pi symbol to show up in the background, I implemented a series of coordinate ranges using if-else statements (sketched below).
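A sketch of that range-based approach (illustrative only -- per the note below, this is not the assignment's actual code, and the ranges here are made up):

-- Decide whether a generated point falls inside a crude "pi" glyph
-- by testing coordinate ranges with chained if-else expressions.
inPiGlyph : Float -> Float -> Bool
inPiGlyph x y =
  if y >= 100 && y <= 120 && x >= 140 && x <= 260 then
    True  -- top bar of the glyph
  else if x >= 160 && x <= 180 && y > 120 && y <= 300 then
    True  -- left leg
  else if x >= 220 && x <= 240 && y > 120 && y <= 300 then
    True  -- right leg
  else
    False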
Unfortunately, I cannot post the source code for this assignment due to university policy (since this assignment may be reused).
TECHNOLOGY:
> Elm:
http://elm-lang.org/
> Functional Reactive Programming (FRP) in Elm (which sadly no longer exists. Check out the article below.):
http://elm-lang.org/blog/farewell-to-frp
At the time of this assignment, we were studying how to represent binary trees. We would introduce a new type -- for example:
type Tree = Empty | Node Int Tree Tree
I recall this project being heavy on recursion, since functional traversals of trees like this are recursive (see the sketch below). There were two parts to the assignment: the first called for us to implement functions that traversed and created trees. We then used some of these functions in the second part, in which we drew either an almost complete or a complete tree; with each mouse click, a new node would be added to the tree.
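Typical helpers over this Tree type look roughly like the following (my own illustrative sketch, since the assignment's code can't be posted):

-- Count the nodes in a tree.
size : Tree -> Int
size tree =
  case tree of
    Empty -> 0
    Node _ left right -> 1 + size left + size right

-- Number of levels in a tree.
height : Tree -> Int
height tree =
  case tree of
    Empty -> 0
    Node _ left right -> 1 + max (height left) (height right)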
Drawing the tree was quite tricky because I had to consider how to represent the tree and how to restart it when a certain number of nodes was reached. I ended up representing the state as a list of trees, which I cycle through. While the animation gives the illusion of adding a node to an existing tree, the truth is that with each mouse click I draw an entirely new tree with one additional node. Note that this animation, like the Pi animation, also uses event-driven Functional Reactive Programming in Elm.
The actual drawing was difficult as well, mostly due to positioning. Initially, I thought the distance between nodes should remain fixed at every level. However, quite quickly into this project, I realized that keeping the horizontal distance constant on each level of the tree would lead to overlaps. I proceeded to halve the horizontal distance between nodes with each additional level, and I increased the vertical distance by 20% with each additional level (restated as a formula below).
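Writing that scheme out: if the root level uses horizontal offset \Delta x_0 and vertical offset \Delta y_0, then at depth d the offsets are

\Delta x_d = \frac{\Delta x_0}{2^d}, \qquad \Delta y_d = (1.2)^d \, \Delta y_0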
Unfortunately, I cannot post the source code for this assignment due to university policy (since this assignment may be reused).
TECHNOLOGY:
> Elm:
http://elm-lang.org/
> Functional Reactive Programming (FRP) in Elm (which sadly no longer exists. Check out the article below.):
http://elm-lang.org/blog/farewell-to-frp