<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Madeline Colbert's Website</title><link>https://maddiecolbert.com/post/</link><description>Recent content in Posts on Madeline Colbert's Website</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 10 Mar 2026 11:36:11 -0500</lastBuildDate><atom:link href="https://maddiecolbert.com/post/index.xml" rel="self" type="application/rss+xml"/><item><title>IOSC</title><link>https://maddiecolbert.com/post/iosc/</link><pubDate>Tue, 10 Mar 2026 11:36:11 -0500</pubDate><guid>https://maddiecolbert.com/post/iosc/</guid><description>&lt;h1 id="linear-regions"&gt;Linear Regions&lt;/h1&gt;
&lt;p&gt;I talked a fair bit about the binary vector that results from embedding a ReLU-activated neural network into a mixed-integer program (MIP). It is denoted $\mathscr{Z}(x)$ throughout the presentation, and it records which ReLU units are active for a given input $x$. But where does it come from?&lt;/p&gt;
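&lt;p&gt;Before getting to the MIP, here is a rough sketch of what that vector looks like for a concrete input. This is my own illustration rather than code from the presentation, and the weights and layer sizes are made up:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

rng = np.random.default_rng(0)

# A tiny ReLU network with made-up weights: input dim 2, hidden layers of 4 and 3 units, scalar output.
weights = [rng.normal(size=(4, 2)), rng.normal(size=(3, 4)), rng.normal(size=(1, 3))]
biases = [rng.normal(size=4), rng.normal(size=3), rng.normal(size=1)]

def activation_pattern(x):
    """Return the binary vector Z(x): one 0/1 entry per hidden ReLU unit."""
    pattern = []
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):  # hidden layers only; the output layer has no ReLU
        pre = W @ h + b                          # pre-activation
        pattern.append((pre &gt; 0).astype(int))    # 1 if the unit is active, 0 otherwise
        h = np.maximum(pre, 0)                   # ReLU
    return np.concatenate(pattern)

print(activation_pattern([0.5, -1.0]))  # a 0/1 vector with 4 + 3 = 7 entries
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Inputs that share the same pattern lie in the same linear region of the network; the MIP below recovers the same 0/1 information through binary variables instead of a forward pass.&lt;/p&gt;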
&lt;p&gt;Since the ReLU function is just the maximum of 0 and its input, we can model it with big-M constraints: $out \ge 0$, $out \ge in$, $out \le in + Mz$, $out \le M(1 - z)$, where $z \in \{0, 1\}$ is a binary variable and $M$ is a sufficiently large constant. When $z = 0$ these constraints force $out = in$, and when $z = 1$ they force $out = 0$, so each binary variable records which side of the ReLU its unit is on.&lt;/p&gt;
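&lt;p&gt;To make that concrete, here is a minimal sketch of the four constraints for a single ReLU unit. It assumes the PuLP package, and the variable names and the choice of $M$ are mine, not from the presentation:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;from pulp import LpBinary, LpMinimize, LpProblem, LpVariable

M = 1000.0   # assumed bound on the magnitude of the pre-activation; must be large enough
x_in = -2.5  # a fixed pre-activation value, to check the encoding

prob = LpProblem("single_relu", LpMinimize)
out = LpVariable("out")             # the ReLU output
z = LpVariable("z", cat=LpBinary)   # z = 1 forces the output to 0, z = 0 forces it to equal the input

# The four big-M constraints from the paragraph above.
prob += out &gt;= 0
prob += out &gt;= x_in
prob += out &lt;= x_in + M * z
prob += out &lt;= M * (1 - z)

prob += out  # any objective works here; for a fixed input the constraints already pin the output down
prob.solve()

print(out.value(), z.value())  # expect out = max(0, x_in) = 0.0 and z = 1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For a negative input, $z = 0$ would contradict $out \ge 0$, so the solver is forced to take $z = 1$ and $out = 0$; for a positive input the opposite happens and $out = in$.&lt;/p&gt;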
</description></item><item><title>Linear_regions</title><link>https://maddiecolbert.com/post/linear_regions/</link><pubDate>Mon, 09 Mar 2026 14:27:32 -0500</pubDate><guid>https://maddiecolbert.com/post/linear_regions/</guid><description>&lt;p&gt;The following is the general MIP formulation of a ReLU function:
$$ out \ge in, \quad out \ge 0, \quad out \le in + Mz, \quad out \le M(1 - z), \quad z \in \{0, 1\} $$&lt;/p&gt;</description></item><item><title>First_post</title><link>https://maddiecolbert.com/post/first_post/</link><pubDate>Mon, 09 Mar 2026 11:26:52 -0500</pubDate><guid>https://maddiecolbert.com/post/first_post/</guid><description>&lt;h1 id="this-is-a-first-post"&gt;This is a first post&lt;/h1&gt;
&lt;p&gt;This is my first post using Hugo.&lt;/p&gt;</description></item></channel></rss>